[ { "msg_contents": "Hi,\n\nWould it be possible to include the user (who changed the row) in the logical replication data?\n\nBest Regards\nTobias\n\n", "msg_date": "Wed, 11 Mar 2020 17:18:24 +0100", "msg_from": "Tobias Stadler <ts.stadler@gmx.de>", "msg_from_op": true, "msg_subject": "User and Logical Replication" }, { "msg_contents": "On 2020-03-11 17:18, Tobias Stadler wrote:\n> Would it be possible to include the user (who changed the row) in the logical replication data?\n\nNot without major re-engineering.\n\nIf you need this information, maybe a BEFORE INSERT OR UPDATE trigger \ncould be used to write this information into a table column.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 11 Mar 2020 19:01:45 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: User and Logical Replication" }, { "msg_contents": "Thanks for the info.\n\n> On 11.03.2020 at 19:01, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> \n> On 2020-03-11 17:18, Tobias Stadler wrote:\n>> Would it be possible to include the user (who changed the row) in the logical replication data?\n> \n> Not without major re-engineering.\n> \n> If you need this information, maybe a BEFORE INSERT OR UPDATE trigger could be used to write this information into a table column.\n> \n> -- \n> Peter Eisentraut http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Thu, 12 Mar 2020 07:45:30 +0100", "msg_from": "Tobias Stadler <ts.stadler@gmx.de>", "msg_from_op": true, "msg_subject": "Re: User and Logical Replication" } ]
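The trigger approach suggested in this thread can be sketched as follows. This is a minimal, hypothetical example — the table `orders`, the column `modified_by`, and the trigger/function names are all illustrative, not from the thread:

```sql
-- Assumed, illustrative schema: any replicated table works the same way.
CREATE TABLE orders (id int PRIMARY KEY, payload text);

-- Record the role that performed the write in an ordinary column,
-- so it travels through logical replication like any other column.
ALTER TABLE orders ADD COLUMN modified_by text;

CREATE FUNCTION record_modifying_user() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    NEW.modified_by := current_user;   -- or session_user, depending on need
    RETURN NEW;
END;
$$;

CREATE TRIGGER orders_record_user
    BEFORE INSERT OR UPDATE ON orders
    FOR EACH ROW
    EXECUTE FUNCTION record_modifying_user();
```

Because `modified_by` is a regular column of the row, the decoded change stream on the subscriber side carries it automatically — no change to the logical decoding machinery is needed.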
[ { "msg_contents": "Hi,?\n\nI'm getting build error while building latest snapshot. Any idea why? Please\nnote that I'm adding this patch to the tarball:\n\nhttps://git.postgresql.org/gitweb/?p=pgrpms.git;a=blob;f=rpm/redhat/master/postgresql-13/master/postgresql-13-var-run-socket.patch;h=a0292a80ae219b4c8dc1c2e686a3521f02b4330d;hb=HEAD\n\n+ ./configure --enable-rpath --prefix=/usr/pgsql-13 --includedir=/usr/pgsql-\n13/include --libdir=/usr/pgsql-13/lib --mandir=/usr/pgsql-13/share/man --\ndatadir=/usr/pgsql-13/share --with-icu --with-llvm --with-perl --with-python --\nwith-tcl --with-tclconfig=/usr/lib64 --with-openssl --with-pam --with-gssapi --\nwith-includes=/usr/include --with-libraries=/usr/lib64 --enable-nls --enable-\ndtrace --with-uuid=e2fs --with-libxml --with-libxslt --with-ldap --with-selinux \n--with-systemd --with-system-tzdata=/usr/share/zoneinfo --\nsysconfdir=/etc/sysconfig/pgsql --docdir=/usr/pgsql-13/doc --\nhtmldir=/usr/pgsql-13/doc/html\n+ MAKELEVEL=0\n+ /usr/bin/make -j4 all\nIn file included from ../../../../src/include/c.h:55,\n from ../../../../src/include/postgres.h:46,\n from guc.c:17:\n../../../../src/include/pg_config_manual.h:200:31: error: called object is not\na function or function pointer\n #define DEFAULT_PGSOCKET_DIR \"/var/run/postgresql\"\n ^~~~~~~~~~~~~~~~~~~~~\nguc.c:4064:3: note: in expansion of macro 'DEFAULT_PGSOCKET_DIR'\n DEFAULT_PGSOCKET_DIR \", /tmp\"\n ^~~~~~~~~~~~~~~~~~~~\nmake[4]: *** [<builtin>: guc.o] Error 1\nmake[3]: *** [../../../src/backend/common.mk:39: misc-recursive] Error 2\nmake[3]: *** Waiting for unfinished jobs....\nmake[2]: *** [common.mk:39: utils-recursive] Error 2\nmake[1]: *** [Makefile:42: all-backend-recurse] Error 2\nmake: *** [GNUmakefile:11: all-src-recurse] Error 2\n\n\nRegards,\n-- \nDevrim Gündüz\nOpen Source Solution Architect, Red Hat Certified Engineer\nTwitter: @DevrimGunduz , @DevrimGunduzTR", "msg_date": "Wed, 11 Mar 2020 16:53:51 +0000", "msg_from": "Devrim 
=?ISO-8859-1?Q?G=FCnd=FCz?= <devrim@gunduz.org>", "msg_from_op": true, "msg_subject": "v13 latest snapshot build error" }, { "msg_contents": "Devrim =?ISO-8859-1?Q?G=FCnd=FCz?= <devrim@gunduz.org> writes:\n> I'm getting build error while building latest snapshot. Any idea why? Please\n> note that I'm adding this patch to the tarball:\n> https://git.postgresql.org/gitweb/?p=pgrpms.git;a=blob;f=rpm/redhat/master/postgresql-13/master/postgresql-13-var-run-socket.patch;h=a0292a80ae219b4c8dc1c2e686a3521f02b4330d;hb=HEAD\n\n(Hey, I recognize that patch ...)\n\n> In file included from ../../../../src/include/c.h:55,\n> from ../../../../src/include/postgres.h:46,\n> from guc.c:17:\n> ../../../../src/include/pg_config_manual.h:200:31: error: called object is not\n> a function or function pointer\n> #define DEFAULT_PGSOCKET_DIR \"/var/run/postgresql\"\n> ^~~~~~~~~~~~~~~~~~~~~\n> guc.c:4064:3: note: in expansion of macro 'DEFAULT_PGSOCKET_DIR'\n> DEFAULT_PGSOCKET_DIR \", /tmp\"\n> ^~~~~~~~~~~~~~~~~~~~\n> make[4]: *** [<builtin>: guc.o] Error 1\n\nThat is just weird. Could it be a compiler bug? I assume you're\nusing some bleeding-edge gcc version, and it's really hard to see\nanother reason why this would fail, especially with a nonsensical\nerror like that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 11 Mar 2020 15:44:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: v13 latest snapshot build error" }, { "msg_contents": "Hi,\n\n(Sorry for top posting)\n\nThis happens on RHEL 8. I don't think it's that bleeding edge.\n\nRegards, Devrim\n\nOn 11 March 2020 19:44:55 GMT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>Devrim =?ISO-8859-1?Q?G=FCnd=FCz?= <devrim@gunduz.org> writes:\n>> I'm getting build error while building latest snapshot. 
Any idea why?\n>Please\n>> note that I'm adding this patch to the tarball:\n>>\n>https://git.postgresql.org/gitweb/?p=pgrpms.git;a=blob;f=rpm/redhat/master/postgresql-13/master/postgresql-13-var-run-socket.patch;h=a0292a80ae219b4c8dc1c2e686a3521f02b4330d;hb=HEAD\n>\n>(Hey, I recognize that patch ...)\n>\n>> In file included from ../../../../src/include/c.h:55,\n>> from ../../../../src/include/postgres.h:46,\n>> from guc.c:17:\n>> ../../../../src/include/pg_config_manual.h:200:31: error: called\n>object is not\n>> a function or function pointer\n>> #define DEFAULT_PGSOCKET_DIR \"/var/run/postgresql\"\n>> ^~~~~~~~~~~~~~~~~~~~~\n>> guc.c:4064:3: note: in expansion of macro 'DEFAULT_PGSOCKET_DIR'\n>> DEFAULT_PGSOCKET_DIR \", /tmp\"\n>> ^~~~~~~~~~~~~~~~~~~~\n>> make[4]: *** [<builtin>: guc.o] Error 1\n>\n>That is just weird. Could it be a compiler bug? I assume you're\n>using some bleeding-edge gcc version, and it's really hard to see\n>another reason why this would fail, especially with a nonsensical\n>error like that.\n>\n>\t\t\tregards, tom lane\n\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n", "msg_date": "Wed, 11 Mar 2020 21:07:32 +0000", "msg_from": "Devrim Gunduz <devrim@gunduz.org>", "msg_from_op": false, "msg_subject": "Re: v13 latest snapshot build error" }, { "msg_contents": "Hi Tom,\n\n[Adding Christoph]\n\nOn Wed, 2020-03-11 at 21:07 +0000, Devrim Gunduz wrote:\n> This happens on RHEL 8. 
I don't think it's that bleeding edge.\n\nMorever, my RHEL 7 build are broken as well:\n\n+ MAKELEVEL=0\n+ /usr/bin/make -j4 all\nIn file included from /usr/include/time.h:37:0,\n from ../../../../src/include/portability/instr_time.h:64,\n from ../../../../src/include/executor/instrument.h:16,\n from ../../../../src/include/nodes/execnodes.h:18,\n from ../../../../src/include/executor/execdesc.h:18,\n from ../../../../src/include/executor/executor.h:17,\n from ../../../../src/include/commands/explain.h:16,\n from ../../../../src/include/commands/prepare.h:16,\n from guc.c:40:\nguc.c:4068:3: error: called object is not a function or function pointer\n NULL, NULL, NULL\n ^\nmake[4]: *** [guc.o] Error 1\nmake[3]: *** [misc-recursive] Error 2\nmake[3]: *** Waiting for unfinished jobs....\nmake[2]: *** [utils-recursive] Error 2\nmake[1]: *** [all-backend-recurse] Error 2\nmake: *** [all-src-recurse] Error 2\nerror: Bad exit status from /var/tmp/rpm-tmp.PXbYKE (%build)\n Bad exit status from /var/tmp/rpm-tmp.PXbYKE (%build)\n\nIDK what is going on, but apparently something is broken somewhere.\n\nCheers,\n--\nDevrim Gündüz\nOpen Source Solution Architect, Red Hat Certified Engineer\nTwitter: @DevrimGunduz , @DevrimGunduzTR", "msg_date": "Thu, 12 Mar 2020 00:09:05 +0000", "msg_from": "Devrim =?ISO-8859-1?Q?G=FCnd=FCz?= <devrim@gunduz.org>", "msg_from_op": true, "msg_subject": "Re: v13 latest snapshot build error" }, { "msg_contents": "Devrim =?ISO-8859-1?Q?G=FCnd=FCz?= <devrim@gunduz.org> writes:\n> On Wed, 2020-03-11 at 21:07 +0000, Devrim Gunduz wrote:\n>> This happens on RHEL 8. I don't think it's that bleeding edge.\n\n> Morever, my RHEL 7 build are broken as well:\n\nHm. We have various RHEL and CentOS 7.x machines in the buildfarm,\nand they aren't unhappy. So seems like it must be something specific\nto your build. 
Are you carrying any other patches?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 11 Mar 2020 20:34:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: v13 latest snapshot build error" }, { "msg_contents": "Hello,\n\nOn 3/12/2020 4:44 AM, Tom Lane wrote:\n> Devrim =?ISO-8859-1?Q?G=FCnd=FCz?= <devrim@gunduz.org> writes:\n>> I'm getting build error while building latest snapshot. Any idea why? Please\n>> note that I'm adding this patch to the tarball:\n>> https://git.postgresql.org/gitweb/?p=pgrpms.git;a=blob;f=rpm/redhat/master/postgresql-13/master/postgresql-13-var-run-socket.patch;h=a0292a80ae219b4c8dc1c2e686a3521f02b4330d;hb=HEAD\n> \n> (Hey, I recognize that patch ...)\n> \n>> In file included from ../../../../src/include/c.h:55,\n>> from ../../../../src/include/postgres.h:46,\n>> from guc.c:17:\n>> ../../../../src/include/pg_config_manual.h:200:31: error: called object is not\n>> a function or function pointer\n>> #define DEFAULT_PGSOCKET_DIR \"/var/run/postgresql\"\n>> ^~~~~~~~~~~~~~~~~~~~~\n>> guc.c:4064:3: note: in expansion of macro 'DEFAULT_PGSOCKET_DIR'\n>> DEFAULT_PGSOCKET_DIR \", /tmp\"\n>> ^~~~~~~~~~~~~~~~~~~~\n>> make[4]: *** [<builtin>: guc.o] Error 1\n> \n> That is just weird. Could it be a compiler bug? I assume you're\n> using some bleeding-edge gcc version, and it's really hard to see\n> another reason why this would fail, especially with a nonsensical\n> error like that.\n\nI'm not familiar with the patch itself. 
But I think there is just a lack \nof the comma here, after \", /tmp\" :-)\n\n> --- a/src/backend/utils/misc/guc.c\n> +++ b/src/backend/utils/misc/guc.c\n> @@ -4061,7 +4061,7 @@ static struct config_string ConfigureNamesString[] =\n> },\n> &Unix_socket_directories,\n> #ifdef HAVE_UNIX_SOCKETS\n> - DEFAULT_PGSOCKET_DIR,\n> + DEFAULT_PGSOCKET_DIR \", /tmp\"\n> #else\n> \"\",\n> #endif\n\nSo, it should be:\n > #ifdef HAVE_UNIX_SOCKETS\n> - DEFAULT_PGSOCKET_DIR,\n> + DEFAULT_PGSOCKET_DIR \", /tmp\",\n> #else\n\n-- \nArtur\n\n\n", "msg_date": "Thu, 12 Mar 2020 11:26:21 +0900", "msg_from": "Artur Zakirov <zaartur@gmail.com>", "msg_from_op": false, "msg_subject": "Re: v13 latest snapshot build error" }, { "msg_contents": "Artur Zakirov <zaartur@gmail.com> writes:\n> I'm not familiar with the patch itself. But I think there is just a lack \n> of the comma here, after \", /tmp\" :-)\n\n[ blink... ] There definitely is a comma there in the version of the\npatch that's in the Fedora repo.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 11 Mar 2020 22:52:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: v13 latest snapshot build error" }, { "msg_contents": "On 3/12/2020 11:52 AM, Tom Lane wrote:\n> Artur Zakirov <zaartur@gmail.com> writes:\n>> I'm not familiar with the patch itself. But I think there is just a lack\n>> of the comma here, after \", /tmp\" :-)\n> \n> [ blink... ] There definitely is a comma there in the version of the\n> patch that's in the Fedora repo.\n\nAh, I see. 
I just saw the version Devrim sent (not sure that it is used \nin the Fedora repo):\n\nhttps://git.postgresql.org/gitweb/?p=pgrpms.git;a=blob;f=rpm/redhat/master/postgresql-13/master/postgresql-13-var-run-socket.patch;h=a0292a80ae219b4c8dc1c2e686a3521f02b4330d;hb=HEAD\n\nAnd I thought there should be a comma at the end of the line after \nconcatenation to avoid concatenation with NULL:\n\nDEFAULT_PGSOCKET_DIR \", /tmp\"\n\n> {\n> {\"unix_socket_directories\", PGC_POSTMASTER, CONN_AUTH_SETTINGS,\n> gettext_noop(\"Sets the directories where Unix-domain sockets will be created.\"),\n> NULL,\n> GUC_SUPERUSER_ONLY\n> },\n> &Unix_socket_directories,\n> #ifdef HAVE_UNIX_SOCKETS\n> DEFAULT_PGSOCKET_DIR \", /tmp\"\n> #else\n> \"\",\n> #endif\n> NULL, NULL, NULL\n> },\n\n\n\n-- \nArtur\n\n\n", "msg_date": "Fri, 13 Mar 2020 10:02:43 +0900", "msg_from": "Artur Zakirov <zaartur@gmail.com>", "msg_from_op": false, "msg_subject": "Re: v13 latest snapshot build error" }, { "msg_contents": "On Wed, 2020-03-11 at 22:52 -0400, Tom Lane wrote:\n> Artur Zakirov <zaartur@gmail.com> writes:\n> > I'm not familiar with the patch itself. But I think there is just a lack \n> > of the comma here, after \", /tmp\" :-)\n> \n> [ blink... ] There definitely is a comma there in the version of the\n> patch that's in the Fedora repo.\n\n\nExactly.Looks like I broke the patch while trying to update the patch.\n\nSorry for the noise.\n\nRegards,\n\n-- \nDevrim Gündüz\nOpen Source Solution Architect, Red Hat Certified Engineer\nTwitter: @DevrimGunduz , @DevrimGunduzTR", "msg_date": "Mon, 16 Mar 2020 11:34:08 +0000", "msg_from": "Devrim =?ISO-8859-1?Q?G=FCnd=FCz?= <devrim@gunduz.org>", "msg_from_op": true, "msg_subject": "Re: v13 latest snapshot build error" } ]
[ { "msg_contents": "Hello hackers,\r\n\r\nI found an issue about get_bit() and set_bit() function,here it is:\r\n################################\r\npostgres=# select get_bit(pg_read_binary_file('/home/movead/temp/file_seek/f_512M'), 0);\r\n2020-03-12 10:05:23.296 CST [10549] ERROR: index 0 out of valid range, 0..-1\r\n2020-03-12 10:05:23.296 CST [10549] STATEMENT: select get_bit(pg_read_binary_file('/home/movead/temp/file_seek/f_512M'), 0);\r\nERROR: index 0 out of valid range, 0..-1\r\npostgres=# select set_bit(pg_read_binary_file('/home/movead/temp/file_seek/f_512M'), 0,1);\r\n2020-03-12 10:05:27.959 CST [10549] ERROR: index 0 out of valid range, 0..-1\r\n2020-03-12 10:05:27.959 CST [10549] STATEMENT: select set_bit(pg_read_binary_file('/home/movead/temp/file_seek/f_512M'), 0,1);\r\nERROR: index 0 out of valid range, 0..-1\r\npostgres=#\r\n################################\r\nPostgreSQL can handle bytea size nearby 1G, but now it reports an\r\nerror when 512M. And I research it and found it is byteaSetBit() and\r\nbyteaGetBit(), it uses an 'int32 len' to hold bit numbers for the long\r\nbytea data, and obvious 512M * 8bit is an overflow for an int32. \r\nSo I fix it and test ok, as below.\r\n################################\r\npostgres=# select get_bit(set_bit(pg_read_binary_file('/home/movead/temp/file_seek/f_512M'), 0,1),0); get_bit --------- 1 (1 row) postgres=# select get_bit(set_bit(pg_read_binary_file('/home/movead/temp/file_seek/f_512M'), 0,0),0); get_bit --------- 0 (1 row) postgres=#\r\n################################\r\n\r\n\r\nAnd I do a check about if anything else related bytea has this issue, several codes have the same issue:\r\n1. byteaout() When formatting bytea as an escape, the 'len' variable should be int64, or\r\nit may use an overflowing number. 2. esc_enc_len() Same as above, the 'len' variable should be int64, and the return type\r\nshould change as int64. 
Due to esc_enc_len() has same call struct with pg_base64_enc_len() and hex_enc_len(), so I want to change the return value of the two function. And the opposite function esc_dec_len() seem nothing wrong. 3. binary_encode() and binary_decode() Here use an 'int32 resultlen' to accept an 'unsigned int' function return, which seem unfortable.\r\nI fix all mentioned above, and patch attachments.\r\nHow do you think about that?\r\n\r\n\r\n\r\n\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Thu, 12 Mar 2020 11:51:38 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": true, "msg_subject": "A bug when use get_bit() function for a long bytea string" }, { "msg_contents": "Hi\n\nOn Thu, Mar 12, 2020 at 9:21 AM movead.li@highgo.ca <movead.li@highgo.ca> wrote:\n>\n> Hello hackers,\n>\n> I found an issue about get_bit() and set_bit() function,here it is:\n> ################################\n> postgres=# select get_bit(pg_read_binary_file('/home/movead/temp/file_seek/f_512M'), 0);\n> 2020-03-12 10:05:23.296 CST [10549] ERROR: index 0 out of valid range, 0..-1\n> 2020-03-12 10:05:23.296 CST [10549] STATEMENT: select get_bit(pg_read_binary_file('/home/movead/temp/file_seek/f_512M'), 0);\n> ERROR: index 0 out of valid range, 0..-1\n> postgres=# select set_bit(pg_read_binary_file('/home/movead/temp/file_seek/f_512M'), 0,1);\n> 2020-03-12 10:05:27.959 CST [10549] ERROR: index 0 out of valid range, 0..-1\n> 2020-03-12 10:05:27.959 CST [10549] STATEMENT: select set_bit(pg_read_binary_file('/home/movead/temp/file_seek/f_512M'), 0,1);\n> ERROR: index 0 out of valid range, 0..-1\n> postgres=#\n> ################################\n> PostgreSQL can handle bytea size nearby 1G, but now it reports an\n> error when 512M. 
And I research it and found it is byteaSetBit() and\n> byteaGetBit(), it uses an 'int32 len' to hold bit numbers for the long\n> bytea data, and obvious 512M * 8bit is an overflow for an int32.\n> So I fix it and test ok, as below.\n> ################################\n\nThanks for the bug report and the analysis. The analysis looks correct.\n\n> postgres=# select get_bit(set_bit(pg_read_binary_file('/home/movead/temp/file_seek/f_512M'), 0,1),0); get_bit --------- 1 (1 row) postgres=# select get_bit(set_bit(pg_read_binary_file('/home/movead/temp/file_seek/f_512M'), 0,0),0); get_bit --------- 0 (1 row) postgres=#\n> ################################\n>\n>\n> And I do a check about if anything else related bytea has this issue, several codes have the same issue:\n> 1. byteaout() When formatting bytea as an escape, the 'len' variable should be int64, or\n> it may use an overflowing number. 2. esc_enc_len() Same as above, the 'len' variable should be int64, and the return type\n> should change as int64. Due to esc_enc_len() has same call struct with pg_base64_enc_len() and hex_enc_len(), so I want to change the return value of the two function. And the opposite function esc_dec_len() seem nothing wrong. 3. binary_encode() and binary_decode() Here use an 'int32 resultlen' to accept an 'unsigned int' function return, which seem unfortable.\n> I fix all mentioned above, and patch attachments.\n> How do you think about that?\n\nWhy have you used size? Shouldn't we use int64?\n\nAlso in the change\n@@ -3458,15 +3458,15 @@ byteaGetBit(PG_FUNCTION_ARGS)\n int32 n = PG_GETARG_INT32(1);\n int byteNo,\n bitNo;\n- int len;\n+ Size len;\n\nIf get_bit()/set_bit() accept the second argument as int32, it can not\nbe used to set bits whose number does not fit 32 bits. 
I think we need\nto change the type of the second argument as well.\n\nAlso, I think declaring len to be int is fine since 1G would fit an\nint, but what does not fit is len * 8, when performing that\ncalculation, we have to widen the result. So, instead of changing the\ndatatype of len, it might be better to perform the calculation as\n(int64)len * 8. If we use int64, we could also use INT64_FORMAT\ninstead of using %ld.\n\nSince this is a bug it shouldn't wait another commitfest, but still\nadd this patch to the commitfest so that it's not forgotten.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Thu, 12 Mar 2020 21:49:25 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: A bug when use get_bit() function for a long bytea string" }, { "msg_contents": "Thanks for the reply.\r\n\r\n>Why have you used size? Shouldn't we use int64?\r\nYes, thanks for the point, I have changed the patch.\r\n \r\n>If get_bit()/set_bit() accept the second argument as int32, it can not\r\n>be used to set bits whose number does not fit 32 bits. I think we need\r\n>to change the type of the second argument as well.\r\nBecause int32 can cover the length of bytea that PostgreSQL support,\r\nand I have decided to follow your next point 'not use 64bit int for len',\r\nso I think the second argument can keep int32.\r\n\r\n>Also, I think declaring len to be int is fine since 1G would fit an\r\n>int, but what does not fit is len * 8, when performing that\r\n>calculation, we have to widen the result. So, instead of changing the\r\n>datatype of len, it might be better to perform the calculation as\r\n>(int64)len * 8. 
If we use int64, we could also use INT64_FORMAT\r\n>instead of using %ld.\r\nHave followed and changed the patch.\r\n \r\n>Since this is a bug it shouldn't wait another commitfest, but still\r\n>add this patch to the commitfest so that it's not forgotten.\r\nWill do.\r\n\r\n\r\n\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Fri, 13 Mar 2020 11:18:47 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": true, "msg_subject": "Re: Re: A bug when use get_bit() function for a long bytea string" }, { "msg_contents": "On Fri, 13 Mar 2020 at 08:48, movead.li@highgo.ca <movead.li@highgo.ca>\nwrote:\n\n> Thanks for the reply.\n>\n> >Why have you used size? Shouldn't we use int64?\n> Yes, thanks for the point, I have changed the patch.\n>\n>\n\nThanks for the patch.\n\n\n> >If get_bit()/set_bit() accept the second argument as int32, it can not\n> >be used to set bits whose number does not fit 32 bits. I think we need\n> >to change the type of the second argument as well.\n> Because int32 can cover the length of bytea that PostgreSQL support,\n>\n\nI think the second argument indicates the bit position, which would be max\nbytea length * 8. If max bytea length covers whole int32, the second\nargument needs to be wider i.e. int64.\n\nSome more comments on the patch\n struct pg_encoding\n {\n- unsigned (*encode_len) (const char *data, unsigned dlen);\n+ int64 (*encode_len) (const char *data, unsigned dlen);\n unsigned (*decode_len) (const char *data, unsigned dlen);\n unsigned (*encode) (const char *data, unsigned dlen, char *res);\n unsigned (*decode) (const char *data, unsigned dlen, char *res);\n\nWhy not use return type of int64 for rest of the functions here as well?\n\n res = enc->encode(VARDATA_ANY(data), datalen, VARDATA(result));\n\n /* Make this FATAL 'cause we've trodden on memory ... 
*/\n- if (res > resultlen)\n+ if ((int64)res > resultlen)\n\nif we change return type of all those functions to int64, we won't need\nthis cast.\n\nRight now we are using int64 because bytea can be 1GB, but what if we\nincrease\nthat limit tomorrow, will int64 be sufficient? That may be unlikely in the\nnear\nfuture, but nevertheless a possibility. Should we then define a new datatype\nwhich resolves to int64 right now but really depends upon the bytea length.\nI\nam not suggesting that we have to do it right now, but may be something to\nthink about.\n\n hex_enc_len(const char *src, unsigned srclen)\n {\n- return srclen << 1;\n+ return (int64)(srclen << 1);\n\nwhy not to convert srclen also to int64. That will also change the\npg_encoding\nmember definitions. But if encoded length requires int64 to fit the possibly\nvalues, same would be true for the source lengths. Why can't the source\nlength\nalso be int64?\n\nIf still we want the cast, I think it should be ((int64)srclen << 1) rather\nthan casting the result.\n\n /* 3 bytes will be converted to 4, linefeed after 76 chars */\n- return (srclen + 2) * 4 / 3 + srclen / (76 * 3 / 4);\n+ return (int64)((srclen + 2) * 4 / 3 + srclen / (76 * 3 / 4));\nsimilar comments as above.\n\n SELECT set_bit(B'0101011000100100', 16, 1); -- fail\n ERROR: bit index 16 out of valid range (0..15)\n+SELECT get_bit(\n+ set_bit((repeat('Postgres', 512 * 1024 * 1024 / 8))::bytea, 0, 0)\n+ ,0);\n+ get_bit\n+---------\n+ 0\n+(1 row)\n\nIt might help to add a test where we could pass the second argument\nsomething\ngreater than 1G. But it may be difficult to write such a test case.\n\n-- \nBest Wishes,\nAshutosh\n", "msg_date": "Mon, 16 Mar 2020 21:58:24 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Re: A bug when use get_bit() function for a long bytea string" }, { "msg_contents": "Hello thanks for the detailed review,\r\n\r\n>I think the second argument indicates the bit position, which would be max bytea length * 8. If max bytea length covers whole int32, the second argument >needs to be wider i.e. int64.\r\nYes, it makes sence and followed.\r\n\r\n> Some more comments on the patch\r\n> struct pg_encoding\r\n> {\r\n>- unsigned (*encode_len) (const char *data, unsigned dlen);\r\n>+ int64 (*encode_len) (const char *data, unsigned dlen);\r\n> unsigned (*decode_len) (const char *data, unsigned dlen);\r\n> unsigned (*encode) (const char *data, unsigned dlen, char *res);\r\n> unsigned (*decode) (const char *data, unsigned dlen, char *res);\r\n> Why not use return type of int64 for rest of the functions here as well?\r\n> res = enc->encode(VARDATA_ANY(data), datalen, VARDATA(result));\r\n> /* Make this FATAL 'cause we've trodden on memory ... 
*/\r\n>- if (res > resultlen)\r\n>+ if ((int64)res > resultlen)\r\n>\r\n>if we change return type of all those functions to int64, we won't need this cast.\r\nI change the 'encode' function, it needs an int64 return type, but keep other\r\nfunctions in 'pg_encoding', because I think it of no necessary reason.\r\n\r\n>Right now we are using int64 because bytea can be 1GB, but what if we increase\r\n>that limit tomorrow, will int64 be sufficient? That may be unlikely in the near\r\n>future, but nevertheless a possibility. Should we then define a new datatype\r\n>which resolves to int64 right now but really depends upon the bytea length. I\r\n>am not suggesting that we have to do it right now, but may be something to\r\n>think about.\r\nI decide to use int64 because if we really want to increase the limit, it should be\r\nthe same change with 'text', 'varchar' which have the same limit. So it may need\r\na more general struct. Hence I give up the idea.\r\n\r\n> hex_enc_len(const char *src, unsigned srclen)\r\n> {\r\n>- return srclen << 1;\r\n>+ return (int64)(srclen << 1);\r\n>\r\n>why not to convert srclen also to int64. That will also change the pg_encoding\r\n>member definitions. But if encoded length requires int64 to fit the possibly\r\n>values, same would be true for the source lengths. Why can't the source length\r\n>also be int64?\r\n>If still we want the cast, I think it should be ((int64)srclen << 1) rather\r\n>than casting the result.\r\nI prefer the '((int64)srclen << 1)' way.\r\n\r\n> /* 3 bytes will be converted to 4, linefeed after 76 chars */\r\n>- return (srclen + 2) * 4 / 3 + srclen / (76 * 3 / 4);\r\n>+ return (int64)((srclen + 2) * 4 / 3 + srclen / (76 * 3 / 4));\r\n>similar comments as above.\r\n Followed.\r\n\r\n\r\n>It might help to add a test where we could pass the second argument something\r\n>greater than 1G. 
But it may be difficult to write such a test case.\r\nAdd two test cases.\r\n\r\n\r\n\r\n\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Wed, 18 Mar 2020 10:48:09 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": true, "msg_subject": "Re: A bug when use get_bit() function for a long bytea string" }, { "msg_contents": "On Wed, 18 Mar 2020 at 08:18, movead.li@highgo.ca <movead.li@highgo.ca>\nwrote:\n\n>\n> Hello thanks for the detailed review,\n>\n> >I think the second argument indicates the bit position, which would be\n> max bytea length * 8. If max bytea length covers whole int32, the second\n> argument >needs to be wider i.e. int64.\n> Yes, it makes sence and followed.\n>\n>\nI think we need a similar change in byteaGetBit() and byteaSetBit() as well.\n\n\n>\n> > Some more comments on the patch\n> > struct pg_encoding\n> > {\n> >- unsigned (*encode_len) (const char *data, unsigned dlen);\n> >+ int64 (*encode_len) (const char *data, unsigned dlen);\n> > unsigned (*decode_len) (const char *data, unsigned dlen);\n> > unsigned (*encode) (const char *data, unsigned dlen, char *res);\n> > unsigned (*decode) (const char *data, unsigned dlen, char *res);\n> > Why not use return type of int64 for rest of the functions here as well?\n> > res = enc->encode(VARDATA_ANY(data), datalen, VARDATA(result));\n> > /* Make this FATAL 'cause we've trodden on memory ... 
*/\n> >- if (res > resultlen)\n> >+ if ((int64)res > resultlen)\n> >\n> >if we change return type of all those functions to int64, we won't need\n> this cast.\n> I change the 'encode' function, it needs an int64 return type, but keep\n> other\n>\n> functions in 'pg_encoding', because I think it of no necessary reason.\n>\n>\nOk, let's leave it for a committer to decide.\n\n\n> >Right now we are using int64 because bytea can be 1GB, but what if we\n> increase\n> >that limit tomorrow, will int64 be sufficient? That may be unlikely in\n> the near\n> >future, but nevertheless a possibility. Should we then define a new\n> datatype\n> >which resolves to int64 right now but really depends upon the bytea\n> length. I\n> >am not suggesting that we have to do it right now, but may be something to\n> >think about.\n> I decide to use int64 because if we really want to increase the limit, it\n> should be\n> the same change with 'text', 'varchar' which have the same limit. So it may\n> need\n> a more general struct. Hence I give up the idea.\n>\n>\nHmm, Let's see what a committer says.\n\nSome more review comments.\n+ int64 res,resultlen;\n\nWe need those on separate lines, possibly.\n\n+ byteNo = (int32)(n / BITS_PER_BYTE);\nDoes it hurt to have byteNo as int64 so as to avoid a cast. Otherwise,\nplease\nadd a comment explaining the reason for the cast. The comment applies at\nother\nplaces where this change appears.\n\n- int len;\n+ int64 len;\nWhy do we need this change?\n int i;\n\n\n>\n>\n> >It might help to add a test where we could pass the second argument\n> something\n> >greater than 1G. 
But it may be difficult to write such a test case.\n> >Add two test cases.\n>\n>\n+\n+select get_bit(\n+ set_bit((repeat('Postgres', 512 * 1024 * 1024 / 8))::bytea, 1024 *\n1024 * 1024 + 1, 0)\n+ ,1024 * 1024 * 1024 + 1);\n\nThis bit position is still within int4.\npostgres=# select pg_column_size(1024 * 1024 * 1024 + 1);\n pg_column_size\n----------------\n 4\n(1 row)\n\nYou want something like\npostgres=# select pg_column_size(512::bigint * 1024 * 1024 * 8);\n pg_column_size\n----------------\n 8\n(1 row)\n\n-- \nBest Wishes,\nAshutosh", "msg_date": "Thu, 26 Mar 2020 19:01:59 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: A bug when use get_bit() function for a long bytea string" }, { "msg_contents": "Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com> writes:\n> On Wed, 18 Mar 2020 at 08:18, movead.li@highgo.ca <movead.li@highgo.ca>\n> wrote:\n>> if we change return type of all those functions to int64, we won't need\n>> this cast.\n>> I change the 'encode' function, it needs an int64 return type, but keep\n>> other\n>> functions in 'pg_encoding', because I think it of no necessary reason.\n\n> Ok, let's leave it for a committer to decide.\n\nIf I'm grasping the purpose of these correctly, wouldn't Size or size_t\nbe a more appropriate type? 
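For illustration only — a sketch of the idea, not code from any posted patch — the callbacks with size_t (the C type behind PostgreSQL's Size typedef) would read roughly:

```c
#include <stddef.h>

/*
 * Illustrative sketch only, not from any posted patch: the pg_encoding
 * callbacks if the length arguments and results used size_t.
 */
struct pg_encoding_sketch
{
	size_t		(*encode_len) (const char *data, size_t dlen);
	size_t		(*decode_len) (const char *data, size_t dlen);
	size_t		(*encode) (const char *data, size_t dlen, char *res);
	size_t		(*decode) (const char *data, size_t dlen, char *res);
};

/* hex encoding doubles its input; with size_t the arithmetic stays unsigned */
static size_t
hex_enc_len_sketch(const char *data, size_t dlen)
{
	(void) data;				/* length depends only on dlen */
	return dlen * 2;
}
```

With a 1GB bytea limit, the doubled hex length (2^31) still fits a 32-bit size_t, which is part of why the choice of type is debatable here.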
And I definitely agree with changing all\nof these APIs at once, if they're all dealing with the same kind of\nvalue.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 26 Mar 2020 10:08:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A bug when use get_bit() function for a long bytea string" }, { "msg_contents": "On Thu, 26 Mar 2020 at 19:39, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com> writes:\n> > On Wed, 18 Mar 2020 at 08:18, movead.li@highgo.ca <movead.li@highgo.ca>\n> > wrote:\n> >> if we change return type of all those functions to int64, we won't need\n> >> this cast.\n> >> I change the 'encode' function, it needs an int64 return type, but keep\n> >> other\n> >> functions in 'pg_encoding', because I think it of no necessary reason.\n>\n> > Ok, let's leave it for a committer to decide.\n>\n> If I'm grasping the purpose of these correctly, wouldn't Size or size_t\n> be a more appropriate type?\n\n\nAndy had used Size in his earlier patch. But I didn't understand the reason\nbehind it and Andy didn't give any reason. From the patch and the code\naround the changes some kind of int (so int64) looked better. But if\nthere's a valid reason for using Size, I am fine with it too. Do we have a\nSQL datatype corresponding to Size?\n\n-- \nBest Wishes,\nAshutosh", "msg_date": "Thu, 26 Mar 2020 22:40:57 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: A bug when use get_bit() function for a long bytea string" }, { "msg_contents": "\tAshutosh Bapat wrote:\n\n> I think we need a similar change in byteaGetBit() and byteaSetBit()\n> as well.\n\nget_bit() and set_bit() as SQL functions take an int4 as the \"offset\"\nargument representing the bit number, meaning that the maximum value\nthat can be passed is 2^31-1.\nBut the maximum theorical size of a bytea value being 1 gigabyte or\n2^30 bytes, the real maximum bit number in a bytea equals 2^33-1\n(2^33=8*2^30), which doesn't fit into an \"int4\". 
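That arithmetic can be checked with a small standalone sketch (illustrative C only, not PostgreSQL source):

```c
#include <stdint.h>

/*
 * Illustrative sketch of the arithmetic above, not PostgreSQL code.
 * A bytea can hold up to 1GB = 2^30 bytes, i.e. 2^33 bits, so the highest
 * valid bit number is 2^33 - 1, which does not fit in a signed 32-bit int.
 */
static int64_t
max_bytea_bit_number(void)
{
	int64_t		max_bytea_bytes = INT64_C(1) << 30;	/* 1GB bytea limit */

	return max_bytea_bytes * 8 - 1;	/* 2^33 - 1 = 8589934591 */
}

/*
 * With an int4 offset, only bit numbers 0 .. 2^31-1 are addressable,
 * i.e. only the first 2^31 / 8 = 2^28 bytes of the value.
 */
static int64_t
int4_reachable_bytes(void)
{
	return (INT64_C(1) << 31) / 8;	/* 268435456 bytes = 256MB */
}
```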
As a result, the\npart of a bytea beyond the first 256MB is inaccessible to get_bit()\nand set_bit().\n\nSo aside from the integer overflow bug, isn't there the issue that the\n\"offset\" argument of get_bit() and set_bit() should have been an\nint8 in the first place?\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n", "msg_date": "Fri, 27 Mar 2020 18:58:44 +0100", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: A bug when use get_bit() function for a long bytea string" }, { "msg_contents": "\"Daniel Verite\" <daniel@manitou-mail.org> writes:\n> So aside from the integer overflow bug, isn't there the issue that the\n> \"offset\" argument of get_bit() and set_bit() should have been an\n> int8 in the first place?\n\nGood point, but a fix for that wouldn't be back-patchable.\n\nIt does suggest that we should just make all the internal logic use int8\nfor these values (as the solution to the overflow issue), and then in\nHEAD only, adjust the function signatures so that int8 can be passed in.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 27 Mar 2020 15:19:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A bug when use get_bit() function for a long bytea string" }, { "msg_contents": ">I think the second argument indicates the bit position, which would be max bytea length * 8. If max bytea length covers whole int32, the second argument >needs to be wider i.e. 
int64.\r\nYes, it makes sence and followed.\r\n\r\n>I think we need a similar change in byteaGetBit() and byteaSetBit() as well.\r\nSorry, I think it's my mistake, it is the two functions above should be changed.\r\n\r\n\r\n> Some more comments on the patch\r\n> struct pg_encoding\r\n> {\r\n>- unsigned (*encode_len) (const char *data, unsigned dlen);\r\n>+ int64 (*encode_len) (const char *data, unsigned dlen);\r\n> unsigned (*decode_len) (const char *data, unsigned dlen);\r\n> unsigned (*encode) (const char *data, unsigned dlen, char *res);\r\n> unsigned (*decode) (const char *data, unsigned dlen, char *res);\r\n> Why not use return type of int64 for rest of the functions here as well?\r\n> res = enc->encode(VARDATA_ANY(data), datalen, VARDATA(result));\r\n> /* Make this FATAL 'cause we've trodden on memory ... */\r\n>- if (res > resultlen)\r\n>+ if ((int64)res > resultlen)\r\n>\r\n>if we change return type of all those functions to int64, we won't need this cast.\r\nI change the 'encode' function, it needs an int64 return type, but keep other \r\nfunctions in 'pg_encoding', because I think it of no necessary reason.\r\n\r\n>Ok, let's leave it for a committer to decide. \r\nWell, I change all of them this time, because Tom Lane supports on next mail.\r\n\r\n \r\n>Some more review comments.\r\n>+ int64 res,resultlen;\r\nDone\r\n\r\n>We need those on separate lines, possibly.\r\n>+ byteNo = (int32)(n / BITS_PER_BYTE);\r\n>Does it hurt to have byteNo as int64 so as to avoid a cast. Otherwise, please\r\n>add a comment explaining the reason for the cast. The comment applies at other\r\n>places where this change appears.\r\n>- int len;\r\n>+ int64 len;\r\n>Why do we need this change?\r\n> int i;\r\nIt is my mistake as describe above, it should not be 'bitgetbit()/bitsetbit()' to be changed.\r\n\r\n\r\n\r\n>It might help to add a test where we could pass the second argument something\r\n>greater than 1G. 
But it may be difficult to write such a test case.\r\nAdd two test cases.\r\n \r\n>+\r\n>+select get_bit(\r\n>+ set_bit((repeat('Postgres', 512 * 1024 * 1024 / 8))::bytea, 1024 * 1024 * 1024 + 1, 0)\r\n>+ ,1024 * 1024 * 1024 + 1);\r\n\r\n>This bit position is still within int4.\r\n>postgres=# select pg_column_size(1024 * 1024 * 1024 + 1); \r\n> pg_column_size\r\n>----------------\r\n> 4 \r\n>(1 row)\r\n\r\n>You want something like\r\n>postgres=# select pg_column_size(512::bigint * 1024 * 1024 * 8); \r\n> pg_column_size\r\n>----------------\r\n> 8 \r\n>(1 row)\r\nI intend to test size large then 1G, and now I think you give a better idea and followed.\r\n\r\n\r\n\r\n\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Sat, 28 Mar 2020 16:59:01 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": true, "msg_subject": "Re: A bug when use get_bit() function for a long bytea string" }, { "msg_contents": "I want to resent the mail, because last one is in wrong format and hardly to read.\r\n\r\nIn addition, I think 'Size' or 'size_t' is rely on platform, they may can't work on 32bit\r\nsystem. So I choose 'int64' after ashutosh's review.\r\n\r\n>>I think the second argument indicates the bit position, which would be max bytea length * 8. If max bytea length covers whole int32, the second argument >needs to be wider i.e. 
int64.\r\n>Yes, it makes sence and followed.\r\n\r\n>>I think we need a similar change in byteaGetBit() and byteaSetBit() as well.\r\nSorry, I think it's my mistake, it is the two functions above should be changed.\r\n\r\n\r\n>>Some more comments on the patch\r\n>> struct pg_encoding\r\n>> {\r\n>>- unsigned (*encode_len) (const char *data, unsigned dlen);\r\n>>+ int64 (*encode_len) (const char *data, unsigned dlen);\r\n>> unsigned (*decode_len) (const char *data, unsigned dlen);\r\n>> unsigned (*encode) (const char *data, unsigned dlen, char *res);\r\n>> unsigned (*decode) (const char *data, unsigned dlen, char *res);\r\n>> Why not use return type of int64 for rest of the functions here as well?\r\n>> res = enc->encode(VARDATA_ANY(data), datalen, VARDATA(result));\r\n>> /* Make this FATAL 'cause we've trodden on memory ... */\r\n>>- if (res > resultlen)\r\n>>+ if ((int64)res > resultlen)\r\n>>\r\n>>if we change return type of all those functions to int64, we won't need this cast.\r\n>I change the 'encode' function, it needs an int64 return type, but keep other \r\n>functions in 'pg_encoding', because I think it of no necessary reason.\r\n\r\n>>Ok, let's leave it for a committer to decide. \r\nWell, I change all of them this time, because Tom Lane supports on next mail.\r\n\r\n \r\n>Some more review comments.\r\n>+ int64 res,resultlen;\r\n>We need those on separate lines, possibly.\r\nDone\r\n\r\n>+ byteNo = (int32)(n / BITS_PER_BYTE);\r\n>Does it hurt to have byteNo as int64 so as to avoid a cast. Otherwise, please\r\n>add a comment explaining the reason for the cast. The comment applies at other\r\n>places where this change appears.\r\n>- int len;\r\n>+ int64 len;\r\n>Why do we need this change?\r\n> int i;\r\nIt is my mistake as describe above, it should not be 'bitgetbit()/bitsetbit()' to be changed.\r\n\r\n\r\n\r\n>>It might help to add a test where we could pass the second argument something\r\n>>greater than 1G. 
But it may be difficult to write such a test case.\r\n>Add two test cases.\r\n \r\n>+\r\n>+select get_bit(\r\n>+ set_bit((repeat('Postgres', 512 * 1024 * 1024 / 8))::bytea, 1024 * 1024 * 1024 + 1, 0)\r\n>+ ,1024 * 1024 * 1024 + 1);\r\n\r\n>This bit position is still within int4.\r\n>postgres=# select pg_column_size(1024 * 1024 * 1024 + 1); \r\n> pg_column_size\r\n>----------------\r\n> 4 \r\n>(1 row)\r\n\r\n>You want something like\r\n>postgres=# select pg_column_size(512::bigint * 1024 * 1024 * 8); \r\n> pg_column_size\r\n>----------------\r\n> 8 \r\n>(1 row)\r\nI intend to test size large then 1G, and now I think you give a better idea and followed.\r\n\r\n\r\n\r\n\r\n\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Sat, 28 Mar 2020 17:09:37 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": true, "msg_subject": "Re: A bug when use get_bit() function for a long bytea string" }, { "msg_contents": "Thanks for the changes,\n+ int64 res,resultlen;\n\nIt's better to have them on separate lines.\n\n-unsigned\n+int64\n hex_decode(const char *src, unsigned len, char *dst)\n\nDo we want to explicitly cast the return value to int64? Will build on some\nplatform crib if not done so? 
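As a quick standalone check of the conversion in question (an illustrative sketch assuming a 32-bit unsigned type, not code from the patch):

```c
#include <stdint.h>

/*
 * Illustrative sketch only. Converting a 32-bit unsigned value to int64
 * preserves the value in standard C whether or not an explicit cast is
 * written, because every uint32 value is representable in int64.
 */
static int64_t
widen_implicit(uint32_t v)
{
	return v;					/* implicit widening conversion */
}

static int64_t
widen_explicit(uint32_t v)
{
	return (int64_t) v;			/* explicit cast: same result */
}
```

So the cast is a matter of readability rather than correctness on any conforming compiler.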
I don't know of such a platform but my\nknowledge in this area is not great.\n\n+ byteNo = (int)(n / 8);\n+ bitNo = (int)(n % 8);\nsome comment explaining why this downcasting is safe here?\n\n- proname => 'get_bit', prorettype => 'int4', proargtypes => 'bytea int4',\n+ proname => 'get_bit', prorettype => 'int4', proargtypes => 'bytea int8',\n prosrc => 'byteaGetBit' },\n { oid => '724', descr => 'set bit',\n- proname => 'set_bit', prorettype => 'bytea', proargtypes => 'bytea int4\nint4',\n+ proname => 'set_bit', prorettype => 'bytea', proargtypes => 'bytea int8\nint4',\n prosrc => 'byteaSetBit' },\n\nShouldn't we have similar changes for following entries as well?\n{ oid => '3032', descr => 'get bit',\n proname => 'get_bit', prorettype => 'int4', proargtypes => 'bit int4',\n prosrc => 'bitgetbit' },\n{ oid => '3033', descr => 'set bit',\n proname => 'set_bit', prorettype => 'bit', proargtypes => 'bit int4 int4',\n prosrc => 'bitsetbit' },\n\nThe tests you have added are for bytea variant which ultimately calles\nbyteaGet/SetBit(). But I think we also need tests for bit variants which\nwill ultimately call bitgetbit and bitsetbit functions.\n\nOnce you address these comments, I think the patch is good for a committer.\nSo please mark the commitfest entry as such when you post the next version\nof patch.\n\nOn Sat, 28 Mar 2020 at 14:40, movead.li@highgo.ca <movead.li@highgo.ca>\nwrote:\n\n> I want to resent the mail, because last one is in wrong format and hardly\n>> to read.\n>>\n>> In addition, I think 'Size' or 'size_t' is rely on platform, they may\n>> can't work on 32bit\n>> system. So I choose 'int64' after ashutosh's review.\n>>\n>> >>I think the second argument indicates the bit position, which would be\n>> max bytea length * 8. If max bytea length covers whole int32, the second\n>> argument >needs to be wider i.e. 
int64.\n>> >Yes, it makes sence and followed.\n>>\n>>\n> >>I think we need a similar change in byteaGetBit() and byteaSetBit() as\n> well.\n> Sorry, I think it's my mistake, it is the two functions above should be\n> changed.\n>\n>\n>> >>Some more comments on the patch\n>> >> struct pg_encoding\n>> >> {\n>> >>- unsigned (*encode_len) (const char *data, unsigned dlen);\n>> >>+ int64 (*encode_len) (const char *data, unsigned dlen);\n>> >> unsigned (*decode_len) (const char *data, unsigned dlen);\n>> >> unsigned (*encode) (const char *data, unsigned dlen, char *res);\n>> >> unsigned (*decode) (const char *data, unsigned dlen, char *res);\n>> >> Why not use return type of int64 for rest of the functions here as\n>> well?\n>> >> res = enc->encode(VARDATA_ANY(data), datalen, VARDATA(result));\n>> >> /* Make this FATAL 'cause we've trodden on memory ... */\n>> >>- if (res > resultlen)\n>> >>+ if ((int64)res > resultlen)\n>> >>\n>> >>if we change return type of all those functions to int64, we won't\n>> need this cast.\n>> >I change the 'encode' function, it needs an int64 return type, but keep\n>> other\n>>\n>> >functions in 'pg_encoding', because I think it of no necessary reason.\n>>\n>>\n> >>Ok, let's leave it for a committer to decide.\n> Well, I change all of them this time, because Tom Lane supports on next\n> mail.\n>\n>\n> >Some more review comments.\n> >+ int64 res,resultlen;\n> >We need those on separate lines, possibly.\n> Done\n>\n> >+ byteNo = (int32)(n / BITS_PER_BYTE);\n> >Does it hurt to have byteNo as int64 so as to avoid a cast. Otherwise,\n> please\n> >add a comment explaining the reason for the cast. 
The comment applies at\n> other\n> >places where this change appears.\n> >- int len;\n> >+ int64 len;\n> >Why do we need this change?\n> > int i;\n> It is my mistake as describe above, it should not be 'bitgetbit()/\n> bitsetbit()' to be changed.\n>\n>\n>>\n>> >>It might help to add a test where we could pass the second argument\n>> something\n>> >>greater than 1G. But it may be difficult to write such a test case.\n>> >Add two test cases.\n>>\n>>\n> >+\n> >+select get_bit(\n> >+ set_bit((repeat('Postgres', 512 * 1024 * 1024 / 8))::bytea, 1024\n> * 1024 * 1024 + 1, 0)\n> >+ ,1024 * 1024 * 1024 + 1);\n>\n> >This bit position is still within int4.\n> >postgres=# select pg_column_size(1024 * 1024 * 1024 + 1);\n> > pg_column_size\n> >----------------\n> > 4\n> >(1 row)\n>\n> >You want something like\n> >postgres=# select pg_column_size(512::bigint * 1024 * 1024 * 8);\n> > pg_column_size\n> >----------------\n> > 8\n> >(1 row)\n> I intend to test size large then 1G, and now I think you give a better\n> idea and followed.\n>\n>\n>\n> ------------------------------\n> Highgo Software (Canada/China/Pakistan)\n> URL : www.highgo.ca\n> EMAIL: mailto:movead(dot)li(at)highgo(dot)ca\n>\n>\n\n-- \nBest Wishes,\nAshutosh", "msg_date": "Tue, 31 Mar 2020 19:34:27 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: A bug when use get_bit() function for a long bytea string" }, { "msg_contents": ">+ int64 res,resultlen;\r\n>It's better to have them on separate lines.\r\nSorry for that, done.\r\n\r\n>-unsigned\r\n>+int64\r\n> hex_decode(const char *src, unsigned len, char *dst)\r\n>Do we want to explicitly cast the return value to int64? 
Will build on some platform crib if not done so?\r\n>I don't know of such a platform but my knowledge in this area is not great.\r\nI think current change can make sure nothing wrong.\r\n\r\n>+ byteNo = (int)(n / 8);\r\n>+ bitNo = (int)(n % 8);\r\n>some comment explaining why this downcasting is safe here?\r\nDone\r\n\r\n>- proname => 'get_bit', prorettype => 'int4', proargtypes => 'bytea int4',\r\n>+ proname => 'get_bit', prorettype => 'int4', proargtypes => 'bytea int8',\r\n> prosrc => 'byteaGetBit' },\r\n> { oid => '724', descr => 'set bit',\r\n>- proname => 'set_bit', prorettype => 'bytea', proargtypes => 'bytea int4 int4',\r\n>+ proname => 'set_bit', prorettype => 'bytea', proargtypes => 'bytea int8 int4',\r\n> prosrc => 'byteaSetBit' },\r\n>Shouldn't we have similar changes for following entries as well?\r\n>{ oid => '3032', descr => 'get bit',\r\n> proname => 'get_bit', prorettype => 'int4', proargtypes => 'bit int4',\r\n> prosrc => 'bitgetbit' },\r\n>{ oid => '3033', descr => 'set bit',\r\n> proname => 'set_bit', prorettype => 'bit', proargtypes => 'bit int4 int4',\r\n > prosrc => 'bitsetbit' },\r\nBecause 'bitsetbit' and 'bitgetbit' do not have to calculate bit size by 'multiply 8',\r\nso I think it seems need not to change it.\r\n\r\n>The tests you have added are for bytea variant which ultimately calles byteaGet/SetBit(). \r\n>But I think we also need tests for bit variants which will ultimately call bitgetbit and bitsetbit functions.\r\nAs above, it need not to touch 'bitgetbit' and 'bitsetbit'.\r\n\r\n>Once you address these comments, I think the patch is good for a committer. 
\r\n>So please mark the commitfest entry as such when you post the next version of patch.\r\nThanks a lot for the detailed review again, and changed patch attached.\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Wed, 1 Apr 2020 11:59:39 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": true, "msg_subject": "Re: A bug when use get_bit() function for a long bytea string" }, { "msg_contents": "\"movead.li@highgo.ca\" <movead.li@highgo.ca> writes:\n> [ long_bytea_string_bug_fix_ver5.patch ]\n\nI don't think this has really solved the overflow hazards. For example,\nin binary_encode we've got\n\n\tresultlen = enc->encode_len(VARDATA_ANY(data), datalen);\n\tresult = palloc(VARHDRSZ + resultlen);\n\nand all you've done about that is changed resultlen from int to int64.\nOn a 64-bit machine, sure, palloc will be able to detect if the\nresult exceeds what can be allocated --- but on a 32-bit machine\nit'd be possible for the size_t argument to overflow altogether.\n(Or if you want to argue that it can't overflow because no encoder\nexpands the data more than 4X, then we don't need to be making this\nchange at all.)\n\nI don't think there's really any way to do that safely without an\nexplicit check before we call palloc.\n\nI also still find the proposed signatures for the encoding-specific\nfunctions to be just plain weird:\n\n-\tunsigned\t(*encode_len) (const char *data, unsigned dlen);\n-\tunsigned\t(*decode_len) (const char *data, unsigned dlen);\n-\tunsigned\t(*encode) (const char *data, unsigned dlen, char *res);\n-\tunsigned\t(*decode) (const char *data, unsigned dlen, char *res);\n+\tint64\t\t(*encode_len) (const char *data, unsigned dlen);\n+\tint64\t\t(*decode_len) (const char *data, unsigned dlen);\n+\tint64\t\t(*encode) (const char *data, unsigned dlen, char *res);\n+\tint64\t\t(*decode) (const char *data, unsigned dlen, 
char *res);\n\nWhy did you change the outputs from unsigned to signed?  Why didn't\nyou change the dlen inputs?  I grant that we know that the input\ncan't exceed 1GB in Postgres' usage, but these signatures are just\nrandomly inconsistent, and you didn't even add a comment explaining\nwhy.  Personally I think I'd make them like\n\n\tuint64\t\t(*encode_len) (const char *data, size_t dlen);\n\nwhich makes it explicit that the dlen argument describes the length\nof a chunk of allocated memory, while the result might exceed that.\n\nLastly, there is a component of this that can be back-patched and\na component that can't --- we do not change system catalog contents\nin released branches.  Cramming both parts into the same patch\nis forcing the committer to pull them apart, which is kind of\nunfriendly.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Apr 2020 12:12:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A bug when use get_bit() function for a long bytea string" }, { "msg_contents": ">I don't think this has really solved the overflow hazards. For example,\r\n>in binary_encode we've got\r\n \r\n>resultlen = enc->encode_len(VARDATA_ANY(data), datalen);\r\n>result = palloc(VARHDRSZ + resultlen);\r\n \r\n>and all you've done about that is changed resultlen from int to int64.\r\n>On a 64-bit machine, sure, palloc will be able to detect if the\r\n>result exceeds what can be allocated --- but on a 32-bit machine\r\n>it'd be possible for the size_t argument to overflow altogether.\r\n>(Or if you want to argue that it can't overflow because no encoder\r\n>expands the data more than 4X, then we don't need to be making this\r\n>change at all.)\r\n \r\n>I don't think there's really any way to do that safely without an\r\n>explicit check before we call palloc.\r\nI am sorry, I do not fully understand this point; in particular,\r\nwhat do you mean by 'size_t'? 
\r\nHere I changed resultlen from int to int64 because then we get a correct\r\nvalue in the error report, rather than '-1' or another strange number.\r\n\r\n\r\n \r\n>Why did you change the outputs from unsigned to signed? Why didn't\r\n>you change the dlen inputs? I grant that we know that the input\r\n>can't exceed 1GB in Postgres' usage, but these signatures are just\r\n>randomly inconsistent, and you didn't even add a comment explaining\r\n>why. Personally I think I'd make them like\r\n>uint64 (*encode_len) (const char *data, size_t dlen);\r\n>which makes it explicit that the dlen argument describes the length\r\n>of a chunk of allocated memory, while the result might exceed that.\r\nI think that makes sense, and I have followed it.\r\n \r\n>Lastly, there is a component of this that can be back-patched and\r\n>a component that can't --- we do not change system catalog contents\r\n>in released branches. Cramming both parts into the same patch\r\n>is forcing the committer to pull them apart, which is kind of\r\n>unfriendly.\r\nSorry about that; attached is the changed patch for PG13, and the one\r\nfor older branches will be sent soon.\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Thu, 2 Apr 2020 14:26:08 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": true, "msg_subject": "Re: A bug when use get_bit() function for a long bytea string" }, { "msg_contents": ">Sorry about that, attached is the changed patch for PG13, and the one\r\n>for older branches will send sooner.\r\nA little update for the patch; patches for all stable branches are available.\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Thu, 2 Apr 2020 15:51:52 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": true, "msg_subject": "Re: A bug when use
get_bit() function for a long bytea string" }, { "msg_contents": "\tmovead.li@highgo.ca wrote:\n\n> A little update for the patch, and patches for all stable avilable. \n\nSome comments about the set_bit/get_bit parts.\nI'm reading long_bytea_string_bug_fix_ver6_pg10.patch, but that\napplies probably to the other files meant for the existing releases\n(I think you could get away with only one patch for backpatching\nand one patch for v13, and committers will sort out how\nto deal with release branches).\n\n byteaSetBit(PG_FUNCTION_ARGS)\n {\n\tbytea\t *res = PG_GETARG_BYTEA_P_COPY(0);\n-\tint32\t\tn = PG_GETARG_INT32(1);\n+\tint64\t\tn = PG_GETARG_INT64(1);\n\tint32\t\tnewBit = PG_GETARG_INT32(2);\n\nThe 2nd argument is 32-bit, not 64. PG_GETARG_INT32(1) must be used.\n\n+\terrmsg(\"index \"INT64_FORMAT\" out of valid range, 0..\"INT64_FORMAT,\n+\t\tn, (int64)len * 8 - 1)));\n\nThe cast to int64 avoids the overflow, but it may also produce a\nresult that does not reflect the actual range, which is limited to\n2^31-1, again because the bit number is a signed 32-bit value.\n\nI believe the formula for the upper limit in the 32-bit case is:\n (len <= PG_INT32_MAX / 8) ? 
(len*8 - 1) : PG_INT32_MAX;\n\nThese functions could benefit from a comment mentioning that\nthey cannot reach the full extent of a bytea, because of the size limit\non the bit number.\n\n--- a/src/test/regress/expected/bit.out\n+++ b/src/test/regress/expected/bit.out\n@@ -656,6 +656,40 @@ SELECT set_bit(B'0101011000100100', 15, 1);\n \n SELECT set_bit(B'0101011000100100', 16, 1);\t-- fail\n ERROR: bit index 16 out of valid range (0..15)+SELECT get_bit(\n+\tset_bit((repeat('Postgres', 512 * 1024 * 1024 / 8))::bytea, 0, 0)\n+\t,0);\n+ get_bit \n+---------\n+\t0\n+(1 row)\n+\n+SELECT get_bit(\n+\tset_bit((repeat('Postgres', 512 * 1024 * 1024 / 8))::bytea, 0, 1)\n+\t,0);\n+ get_bit \n+---------\n+\t1\n+(1 row)\n+\n\nThese 2 tests need to allocate big chunks of contiguous memory, so they\nmight fail for lack of memory on tiny machines, and even when not failing,\nthey're pretty slow to run. Are they worth the trouble?\n\n+select get_bit(\n+\t set_bit((repeat('Postgres', 512 * 1024 * 1024 / 8))::bytea, \n+\t\t 512::bigint * 1024 * 1024 * 8 - 1, 0)\n+\t ,512::bigint * 1024 * 1024 * 8 - 1);\n+ get_bit \n+---------\n+\t0\n+(1 row)\n+\n+select get_bit(\n+\t set_bit((repeat('Postgres', 512 * 1024 * 1024 / 8))::bytea,\n+\t\t 512::bigint * 1024 * 1024 * 8 - 1, 1)\n+\t,512::bigint * 1024 * 1024 * 8 - 1);\n+ get_bit \n+---------\n+\t1\n+(1 row)\n\nThese 2 tests are supposed to fail in existing releases because set_bit()\nand get_bit() don't take a bigint as the 2nd argument.\nAlso, the same comment as above on how much they allocate.\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n", "msg_date": "Thu, 02 Apr 2020 18:58:59 +0200", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: A bug when use get_bit() function for a long bytea string" }, { "msg_contents": "\"Daniel Verite\" <daniel@manitou-mail.org> writes:\n> These 2 tests need to allocate big chunks 
of contiguous memory, so they\n> might fail for lack of memory on tiny machines, and even when not failing,\n> they're pretty slow to run. Are they worth the trouble?\n\nYeah, I'd noticed those on previous readings of the patch. They'd almost\ncertainly fail on some of our older/smaller buildfarm members, so they're\nnot getting committed, even if they didn't require multiple seconds apiece\nto run (even on a machine with plenty of memory). It's useful to have\nthem for initial testing though.\n\nIt'd be great if there was a way to test get_bit/set_bit on large\nindexes without materializing a couple of multi-hundred-MB objects.\nCan't think of one offhand though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 Apr 2020 16:04:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A bug when use get_bit() function for a long bytea string" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> \"Daniel Verite\" <daniel@manitou-mail.org> writes:\n>> These 2 tests need to allocate big chunks of contiguous memory, so they\n>> might fail for lack of memory on tiny machines, and even when not failing,\n>> they're pretty slow to run. Are they worth the trouble?\n>\n> Yeah, I'd noticed those on previous readings of the patch. They'd almost\n> certainly fail on some of our older/smaller buildfarm members, so they're\n> not getting committed, even if they didn't require multiple seconds apiece\n> to run (even on a machine with plenty of memory). It's useful to have\n> them for initial testing though.\n\nPerl's test suite has a similar issue with tests for handling of huge\nstrings, hashes, arrays, regexes etc. We've taken the approach of\nchecking the environment variable PERL_TEST_MEMORY and skipping tests\nthat need more than that many gigabytes. 
We currently have tests that\ncheck for values from 1 all the way up to 96 GiB.\n\nThis would be trivial to do in the Postgres TAP tests, but something\nsimilar might feasible in the pg_regress too?\n\n> It'd be great if there was a way to test get_bit/set_bit on large\n> indexes without materializing a couple of multi-hundred-MB objects.\n> Can't think of one offhand though.\n\nFor this usecase it might make sense to express the limit in megabytes,\nand have a policy for how much memory tests can assume without explicit\nopt-in from the developer or buildfarm animal.\n\n- ilmari\n-- \n\"The surreality of the universe tends towards a maximum\" -- Skud's Law\n\"Never formulate a law or axiom that you're not prepared to live with\n the consequences of.\" -- Skud's Meta-Law\n\n\n", "msg_date": "Thu, 02 Apr 2020 22:56:39 +0100", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)", "msg_from_op": false, "msg_subject": "Re: A bug when use get_bit() function for a long bytea string" }, { "msg_contents": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=) writes:\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n>> Yeah, I'd noticed those on previous readings of the patch. They'd almost\n>> certainly fail on some of our older/smaller buildfarm members, so they're\n>> not getting committed, even if they didn't require multiple seconds apiece\n>> to run (even on a machine with plenty of memory). It's useful to have\n>> them for initial testing though.\n\n> Perl's test suite has a similar issue with tests for handling of huge\n> strings, hashes, arrays, regexes etc. We've taken the approach of\n> checking the environment variable PERL_TEST_MEMORY and skipping tests\n> that need more than that many gigabytes. We currently have tests that\n> check for values from 1 all the way up to 96 GiB.\n> This would be trivial to do in the Postgres TAP tests, but something\n> similar might feasible in the pg_regress too?\n\nMeh. 
The memory is only part of it; the other problem is that multiple\nseconds expended in every future run of the regression tests is a price\nthat's many orders of magnitude higher than the potential value of this\ntest case.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 Apr 2020 18:21:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A bug when use get_bit() function for a long bytea string" }, { "msg_contents": ">Some comments about the set_bit/get_bit parts.\r\n>I'm reading long_bytea_string_bug_fix_ver6_pg10.patch, but that\r\n>applies probably to the other files meant for the existing releases\r\n>(I think you could get away with only one patch for backpatching\r\n>and one patch for v13, and committers will sort out how\r\n>to deal with release branches).\r\nThanks for teaching me.\r\n\r\n>byteaSetBit(PG_FUNCTION_ARGS)\r\n>{\r\n>bytea *res = PG_GETARG_BYTEA_P_COPY(0);\r\n>- int32 n = PG_GETARG_INT32(1);\r\n>+ int64 n = PG_GETARG_INT64(1);\r\n>int32 newBit = PG_GETARG_INT32(2);\r\n>The 2nd argument is 32-bit, not 64. PG_GETARG_INT32(1) must be used. \r\n>+ errmsg(\"index \"INT64_FORMAT\" out of valid range, 0..\"INT64_FORMAT,\r\n>+ n, (int64)len * 8 - 1)));\r\n>The cast to int64 avoids the overflow, but it may also produce a\r\n>result that does not reflect the actual range, which is limited to\r\n>2^31-1, again because the bit number is a signed 32-bit value. \r\n>I believe the formula for the upper limit in the 32-bit case is:\r\n> (len <= PG_INT32_MAX / 8) ? (len*8 - 1) : PG_INT32_MAX;\r\n\r\n>These functions could benefit from a comment mentioning that\r\n>they cannot reach the full extent of a bytea, because of the size limit\r\n>on the bit number.\r\n\r\nBecause the 2nd argument describes a 'bit' location within the whole bytea\r\nstring, an int32 is not enough for that. So I think the change is not wrong,\r\nor have I misunderstood your point? 
\r\n\r\n\r\n>These 2 tests need to allocate big chunks of contiguous memory, so they\r\n>might fail for lack of memory on tiny machines, and even when not failing,\r\n>they're pretty slow to run. Are they worth the trouble?\r\n\r\n\r\n>These 2 tests are supposed to fail in existing releases because set_bit()\r\n>and get_bit() don't take a bigint as the 2nd argument.\r\n>Also, the same comment as above on how much they allocate.\r\nI have deleted the four test cases because they are not worth the memory and time,\r\nand no new test cases are added because generating that much data takes too long.\r\n\r\nNew patch attached.\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Fri, 3 Apr 2020 10:28:15 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": true, "msg_subject": "Re: A bug when use get_bit() function for a long bytea string" }, { "msg_contents": "\tmovead.li@highgo.ca wrote:\n\n> >I believe the formula for the upper limit in the 32-bit case is: \n> > (len <= PG_INT32_MAX / 8) ? (len*8 - 1) : PG_INT32_MAX; \n> \n> >These functions could benefit from a comment mentioning that \n> >they cannot reach the full extent of a bytea, because of the size limit \n> >on the bit number. \n> \n> Because the 2nd argument is describing 'bit' location of every byte in bytea\n> string, so an int32 is not enough for that. I think the change is nothing\n> wrong, \n> or I have not caught your means? 
\n\nIn existing releases, the SQL definitions are set_bit(bytea,int4,int4)\nand get_bit(bytea,int4) and cannot be changed to not break the API.\nSo the patch meant for existing releases has to deal with a too-narrow\nint32 bit number.\n\nInternally in the C functions, you may convert that number to int64\nif you think it's necessary, but you may not use PG_GETARG_INT64\nto pick a 32-bit argument.\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n", "msg_date": "Fri, 03 Apr 2020 10:53:33 +0200", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: A bug when use get_bit() function for a long bytea string" }, { "msg_contents": ">In existing releases, the SQL definitions are set_bit(bytea,int4,int4)\r\n>and get_bit(bytea,int4) and cannot be changed to not break the API.\r\n>So the patch meant for existing releases has to deal with a too-narrow\r\n>int32 bit number.\r\n \r\n>Internally in the C functions, you may convert that number to int64\r\n>if you think it's necessary, but you may not use PG_GETARG_INT64\r\n>to pick a 32-bit argument.\r\nThe input parameter of 'set_bit()' function for 'byteaGetBit' has changed\r\nto 'bytea int8 int4', but maybe another 'set_bit()' for 'bitgetbit' need not\r\nchanged. The same with 'get_bit()'.\r\n\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca\r\n\n\n\n>In existing releases, the SQL definitions are set_bit(bytea,int4,int4)>and get_bit(bytea,int4) and cannot be changed to not break the API.>So the patch meant for existing releases has to deal with a too-narrow>int32 bit number. 
>Internally in the C functions, you may convert that number to int64\r\n>if you think it's necessary, but you may not use PG_GETARG_INT64\r\n>to pick a 32-bit argument.\r\nThe input parameter of the 'set_bit()' function for 'byteaSetBit' has been changed\r\nto 'bytea int8 int4', but maybe the other 'set_bit()' (for 'bitsetbit') need not be\r\nchanged. The same goes for 'get_bit()'.\r\n\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca\r\n", "msg_date": "Tue, 7 Apr 2020 09:29:51 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": true, "msg_subject": "Re: A bug when use get_bit() function for a long bytea string" }, { "msg_contents": "Hello hackers,\r\n\r\nAfter several patch revisions following the hackers' proposals, I think it's ready to\r\ncommit. Can we commit it before the code freeze for pg-13?\r\n\r\n\r\n\r\n\r\nRegards,\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca\r\n", "msg_date": "Tue, 7 Apr 2020 13:39:37 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": true, "msg_subject": "Re: A bug when use get_bit() function for a long bytea string" }, { "msg_contents": "\"movead.li@highgo.ca\" <movead.li@highgo.ca> writes:\n> After several patch change by hacker's proposal, I think it's ready to\n> commit, can we commit it before doing the code freeze for pg-13?\n\nIt would be easier to get this done if you had addressed any of the\nobjections to the patch as given. Integer-overflow handling is still\n
Integer-overflow handling is still\nmissing, and you still are assuming that it's okay to change catalog\nentries in released branches.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 Apr 2020 11:02:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A bug when use get_bit() function for a long bytea string" }, { "msg_contents": "I wrote:\n> It would be easier to get this done if you had addressed any of the\n> objections to the patch as given. Integer-overflow handling is still\n> missing, and you still are assuming that it's okay to change catalog\n> entries in released branches.\n\nSince we are hard upon the feature freeze deadline, I took it on myself\nto split this apart. As far as I can see, the only part we really want\nto back-patch is the adjustment of the range-limit comparisons in\nbyteaGetBit and byteaSetBit to use int64 arithmetic, so they don't\ngo wacko when the input bytea exceeds 256MB. The other changes are\nnot live bugs because in current usage the estimated result size of\nan encoding or decoding transform couldn't exceed 4 times 1GB.\nHence it won't overflow size_t even on 32-bit machines, thus the\ncheck in palloc() is sufficient to deal with overlength values.\nBut it's worth making those changes going forward, I suppose,\nin case somebody wants to deal with longer strings someday.\n\nThere were some other minor problems too, but I think I fixed\neverything.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 Apr 2020 16:39:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A bug when use get_bit() function for a long bytea string" } ]
[ { "msg_contents": "Hi hacker,\n\nWhen reading the grouping sets codes, I find that the additional size of\nthe hash table for hash aggregates is always zero, this seems to be\nincorrect to me, attached a patch to fix it, please help to check.\n\nThanks,\nPengzhou", "msg_date": "Thu, 12 Mar 2020 16:35:15 +0800", "msg_from": "Pengzhou Tang <ptang@pivotal.io>", "msg_from_op": true, "msg_subject": "Additional size of hash table is alway zero for hash aggregates" }, { "msg_contents": "Hi,\n\n\nOn 2020-03-12 16:35:15 +0800, Pengzhou Tang wrote:\n> When reading the grouping sets codes, I find that the additional size of\n> the hash table for hash aggregates is always zero, this seems to be\n> incorrect to me, attached a patch to fix it, please help to check.\n\nIndeed, that's incorrect. Causes the number of buckets for the hashtable\nto be set higher - the size is just used for that. I'm a bit wary of\nchanging this in the stable branches - could cause performance changes?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 12 Mar 2020 12:16:26 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Additional size of hash table is alway zero for hash aggregates" }, { "msg_contents": "On Thu, Mar 12, 2020 at 12:16:26PM -0700, Andres Freund wrote:\n> On 2020-03-12 16:35:15 +0800, Pengzhou Tang wrote:\n> > When reading the grouping sets codes, I find that the additional size of\n> > the hash table for hash aggregates is always zero, this seems to be\n> > incorrect to me, attached a patch to fix it, please help to check.\n> \n> Indeed, that's incorrect. Causes the number of buckets for the hashtable\n> to be set higher - the size is just used for that. I'm a bit wary of\n> changing this in the stable branches - could cause performance changes?\n\nI found that it was working when Andres implemented TupleHashTable, but broke\nat:\n\n| b5635948ab Support hashed aggregation with grouping sets.\n\nSo affects v11 and v12. 
entrysize isn't used for anything else.\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 12 Mar 2020 18:11:16 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Additional size of hash table is alway zero for hash aggregates" }, { "msg_contents": ">>>>> \"Justin\" == Justin Pryzby <pryzby@telsasoft.com> writes:\n\n > On Thu, Mar 12, 2020 at 12:16:26PM -0700, Andres Freund wrote:\n >> Indeed, that's incorrect. Causes the number of buckets for the\n >> hashtable to be set higher - the size is just used for that. I'm a\n >> bit wary of changing this in the stable branches - could cause\n >> performance changes?\n\nI think (offhand, not tested) that the number of buckets would only be\naffected if the (planner-supplied) numGroups value would cause work_mem\nto be exceeded; the planner doesn't plan a hashagg at all in that case\nunless forced to (grouping by a hashable but not sortable column). Note\nthat for various reasons the planner tends to over-estimate the memory\nrequirement anyway.\n\nOr maybe if work_mem had been reduced between plan time and execution\ntime....\n\nSo this is unlikely to be causing any issue in practice, so backpatching\nmay not be called for.\n\nI'll deal with it in HEAD only, unless someone else has a burning desire\nto take it.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n", "msg_date": "Fri, 13 Mar 2020 00:34:22 +0000", "msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>", "msg_from_op": false, "msg_subject": "Re: Additional size of hash table is alway zero for hash aggregates" }, { "msg_contents": "Hi,\n\nOn 2020-03-13 00:34:22 +0000, Andrew Gierth wrote:\n> >>>>> \"Justin\" == Justin Pryzby <pryzby@telsasoft.com> writes:\n> \n> > On Thu, Mar 12, 2020 at 12:16:26PM -0700, Andres Freund wrote:\n> >> Indeed, that's incorrect. Causes the number of buckets for the\n> >> hashtable to be set higher - the size is just used for that. 
I'm a\n> >> bit wary of changing this in the stable branches - could cause\n> >> performance changes?\n> \n> I think (offhand, not tested) that the number of buckets would only be\n> affected if the (planner-supplied) numGroups value would cause work_mem\n> to be exceeded; the planner doesn't plan a hashagg at all in that case\n> unless forced to (grouping by a hashable but not sortable column). Note\n> that for various reasons the planner tends to over-estimate the memory\n> requirement anyway.\n\nThat's a good point.\n\n\n> Or maybe if work_mem had been reduced between plan time and execution\n> time....\n> \n> So this is unlikely to be causing any issue in practice, so backpatching\n> may not be called for.\n\nSounds sane to me.\n\n\n> I'll deal with it in HEAD only, unless someone else has a burning desire\n> to take it.\n\nFeel free.\n\nI wonder if we should just remove the parameter though? I'm not sure\nthere's much point in having it, given it's just callers filling\n->additionalstate. And the nbuckets is passed in externally anyway - so\nthere needs to have been a memory sizing determination previously\nanyway? The other users just specify 0 already.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 12 Mar 2020 17:53:55 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Additional size of hash table is alway zero for hash aggregates" }, { "msg_contents": ">\n> On 2020-03-12 16:35:15 +0800, Pengzhou Tang wrote:\n> > When reading the grouping sets codes, I find that the additional size of\n> > the hash table for hash aggregates is always zero, this seems to be\n> > incorrect to me, attached a patch to fix it, please help to check.\n>\n> Indeed, that's incorrect. Causes the number of buckets for the hashtable\n> to be set higher - the size is just used for that. 
I'm a bit wary of\n> changing this in the stable branches - could cause performance changes?\n>\n>\n thanks for confirming this.", "msg_date": "Sat, 14 Mar 2020 08:50:55 +0800", "msg_from": "Pengzhou Tang <ptang@pivotal.io>", "msg_from_op": true, "msg_subject": "Re: Additional size of hash table is alway zero for hash aggregates" }, { "msg_contents": "On Fri, Mar 13, 2020 at 8:34 AM Andrew Gierth <andrew@tao11.riddles.org.uk>\nwrote:\n\n> >>>>> \"Justin\" == Justin Pryzby <pryzby@telsasoft.com> writes:\n>\n> > On Thu, Mar 12, 2020 at 12:16:26PM -0700, Andres Freund wrote:\n> >> Indeed, that's incorrect. Causes the number of buckets for the\n> >> hashtable to be set higher - the size is just used for that. I'm a\n> >> bit wary of changing this in the stable branches - could cause\n> >> performance changes?\n>\n> I think (offhand, not tested) that the number of buckets would only be\n> affected if the (planner-supplied) numGroups value would cause work_mem\n> to be exceeded; the planner doesn't plan a hashagg at all in that case\n> unless forced to (grouping by a hashable but not sortable column). 
Note\n> that for various reasons the planner tends to over-estimate the memory\n> requirement anyway.\n>\n>\nThat makes sense, thanks", "msg_date": "Sat, 14 Mar 2020 08:51:14 +0800", "msg_from": "Pengzhou Tang <ptang@pivotal.io>", "msg_from_op": true, "msg_subject": "Re: Additional size of hash table is alway zero for hash aggregates" }, { "msg_contents": "Thanks, Andres Freund and Andres Gierth.\n\nRelatedly, can I invite you to help to review the parallel grouping sets\npatches? 
It will be very great to hear some comments from you since you\ncontributed most of the codes for grouping sets.\n\nthe thread is\nhttps://www.postgresql.org/message-id/CAG4reAQ8rFCc%2Bi0oju3VjaW7xSOJAkvLrqa4F-NYZzAG4SW7iQ%40mail.gmail.com\n\nThanks,\nPengzhou\n\nOn Fri, Mar 13, 2020 at 3:16 AM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n>\n> On 2020-03-12 16:35:15 +0800, Pengzhou Tang wrote:\n> > When reading the grouping sets codes, I find that the additional size of\n> > the hash table for hash aggregates is always zero, this seems to be\n> > incorrect to me, attached a patch to fix it, please help to check.\n>\n> Indeed, that's incorrect. Causes the number of buckets for the hashtable\n> to be set higher - the size is just used for that. I'm a bit wary of\n> changing this in the stable branches - could cause performance changes?\n>\n> Greetings,\n>\n> Andres Freund\n>\n\nThanks, Andres Freund and Andres Gierth.To be related, can I invite you to help to review the parallel grouping setspatches? It will be very great to hear some comments from you since youcontributed most of the codes for grouping sets.the thread is https://www.postgresql.org/message-id/CAG4reAQ8rFCc%2Bi0oju3VjaW7xSOJAkvLrqa4F-NYZzAG4SW7iQ%40mail.gmail.comThanks,PengzhouOn Fri, Mar 13, 2020 at 3:16 AM Andres Freund <andres@anarazel.de> wrote:Hi,\n\n\nOn 2020-03-12 16:35:15 +0800, Pengzhou Tang wrote:\n> When reading the grouping sets codes, I find that the additional size of\n> the hash table for hash aggregates is always zero, this seems to be\n> incorrect to me, attached a patch to fix it, please help to check.\n\nIndeed, that's incorrect. Causes the number of buckets for the hashtable\nto be set higher - the size is just used for that.  
I'm a bit wary of\nchanging this in the stable branches - could cause performance changes?\n\nGreetings,\n\nAndres Freund", "msg_date": "Sat, 14 Mar 2020 11:13:46 +0800", "msg_from": "Pengzhou Tang <ptang@pivotal.io>", "msg_from_op": true, "msg_subject": "Re: Additional size of hash table is alway zero for hash aggregates" }, { "msg_contents": "On Fri, 2020-03-13 at 00:34 +0000, Andrew Gierth wrote:\n> > > > > > \"Justin\" == Justin Pryzby <pryzby@telsasoft.com> writes:\n> \n> > On Thu, Mar 12, 2020 at 12:16:26PM -0700, Andres Freund wrote:\n> >> Indeed, that's incorrect. Causes the number of buckets for the\n> >> hashtable to be set higher - the size is just used for that. I'm\n> a\n\nIt's also used to set the 'entrysize' field of the TupleHashTable,\nwhich doesn't appear to be used for anything? Maybe we should just\nremove that field... it confused me for a moment as I was looking into\nthis.\n\n> >> bit wary of changing this in the stable branches - could cause\n> >> performance changes?\n> \n> I think (offhand, not tested) that the number of buckets would only\n> be\n> affected if the (planner-supplied) numGroups value would cause\n> work_mem\n> to be exceeded; the planner doesn't plan a hashagg at all in that\n> \n\nNow that we have Disk-based HashAgg, which already tries to choose the\nnumber of buckets with work_mem in mind; and no other caller specifies\nnon-zero additionalsize, why not just get rid of that argument\ncompletely? It can still sanity check against work_mem for the sake of\nother callers. But it doesn't need 'additionalsize' to do so.\n\nOr, we can keep the 'additionalsize' argument but put it to work store\nthe AggStatePerGroupData inline in the hash table. That would allow us\nto remove the 'additional' pointer from TupleHashEntryData, saving 8\nbytes plus the chunk header for every group. 
That sounds very tempting.\n\nIf we want to get even more clever, we could try to squish\nAggStatePerGroupData into 8 bytes by putting the flags\n(transValueIsNull and noTransValue) into unused bits of the Datum. That\nwould work if the transtype is by-ref (low bits if pointer will be\nunused), or if the type's size is less than 8, or if the particular\naggregate doesn't need either of those booleans. It would get messy,\nbut saving 8 bytes per group is non-trivial.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Sat, 21 Mar 2020 17:45:31 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Additional size of hash table is alway zero for hash aggregates" }, { "msg_contents": "Hi,\n\nOn 2020-03-21 17:45:31 -0700, Jeff Davis wrote:\n> Or, we can keep the 'additionalsize' argument but put it to work store\n> the AggStatePerGroupData inline in the hash table. That would allow us\n> to remove the 'additional' pointer from TupleHashEntryData, saving 8\n> bytes plus the chunk header for every group. That sounds very tempting.\n\nI don't see how? That'd require making the hash bucket addressing deal\nwith variable sizes, which'd be bad for performance reasons. Since there\ncan be a aggstate->numtrans AggStatePerGroupDatas for each hash table\nentry, I don't see how to avoid a variable size?\n\n\n> If we want to get even more clever, we could try to squish\n> AggStatePerGroupData into 8 bytes by putting the flags\n> (transValueIsNull and noTransValue) into unused bits of the Datum.\n> That would work if the transtype is by-ref (low bits if pointer will\n> be unused), or if the type's size is less than 8, or if the particular\n> aggregate doesn't need either of those booleans. It would get messy,\n> but saving 8 bytes per group is non-trivial.\n\nI'm somewhat doubtful it's worth going for those per-type optimizations\n- the wins don't seem large enough, relative to other per-group space\nneeds. 
Also adds additional instructions to fetching those values...\n\nIf we want to optimize memory usage, I think I'd first go for allocating\nthe group's \"firstTuple\" together with all the AggStatePerGroupDatas.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 21 Mar 2020 18:26:59 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Additional size of hash table is alway zero for hash aggregates" }, { "msg_contents": "On Sat, 2020-03-21 at 18:26 -0700, Andres Freund wrote:\n> I don't see how? That'd require making the hash bucket addressing\n> deal\n> with variable sizes, which'd be bad for performance reasons. Since\n> there\n> can be a aggstate->numtrans AggStatePerGroupDatas for each hash table\n> entry, I don't see how to avoid a variable size?\n\nIt would not vary for a given hash table. Do you mean the compile-time\nspecialization (of simplehash.h) would not work any more?\n\nIf we aren't storing the \"additional\" inline in the hash entry, I don't\nsee any purpose for the argument to BuildTupleHashTableExt(), nor the\npurpose of the \"entrysize\" field of TupleHashTableData.\n\n> If we want to optimize memory usage, I think I'd first go for\n> allocating\n> the group's \"firstTuple\" together with all the AggStatePerGroupDatas.\n\nThat's a good idea.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Mon, 23 Mar 2020 13:29:02 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: Additional size of hash table is alway zero for hash aggregates" }, { "msg_contents": "Hi,\n\nOn 2020-03-23 13:29:02 -0700, Jeff Davis wrote:\n> On Sat, 2020-03-21 at 18:26 -0700, Andres Freund wrote:\n> > I don't see how? That'd require making the hash bucket addressing\n> > deal\n> > with variable sizes, which'd be bad for performance reasons. 
Since\n> > there\n> > can be a aggstate->numtrans AggStatePerGroupDatas for each hash table\n> > entry, I don't see how to avoid a variable size?\n> \n> It would not vary for a given hash table. Do you mean the compile-time\n> specialization (of simplehash.h) would not work any more?\n\nYes.\n\n\n> If we aren't storing the \"additional\" inline in the hash entry, I don't\n> see any purpose for the argument to BuildTupleHashTableExt(), nor the\n> purpose of the \"entrysize\" field of TupleHashTableData.\n\nYea, that was my conclusion too. It looked like Andrew was going to\ncommit a fix, but that hasn't happened yet.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 23 Mar 2020 14:00:37 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Additional size of hash table is alway zero for hash aggregates" } ]
[ { "msg_contents": "i want to audit dml changes to my audit table and i dont want to use trigger. i am trying to pass the transition table data to my function. (or another solution) i want to write an extension and my code is like above.\n\n\nif (queryDesc->operation == CMD_INSERT) {\n SPI_connect();\n\n Oid argtypes[1] = { REGTYPEOID }; **//whatever**\n\n SPI_execute_with_args(\"SELECT save_insert($1)\", 1, argtypes, queryDesc->estate->es_tupleTable, NULL, true, 0);\n\n SPI_finish();\n }", "msg_date": "Thu, 12 Mar 2020 10:54:28 +0000", "msg_from": "Onur ALTUN <onuraltun@hotmail.com>", "msg_from_op": true, "msg_subject": "can i pass the transition tables to any function from hooks like\n ExecutorFinish?" } ]
[ { "msg_contents": "Thanks a lot for reviewing and pushing this.\r\n\r\nEka\r\n\r\nOn 3/12/20, 1:23 AM, \"Thomas Munro\" <thomas.munro@gmail.com> wrote:\r\n\r\n CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.\r\n \r\n \r\n \r\n On Wed, Mar 11, 2020 at 7:47 PM Thomas Munro <thomas.munro@gmail.com> wrote:\r\n > On Sat, Feb 22, 2020 at 6:10 AM Palamadai, Eka <ekanatha@amazon.com> wrote:\r\n > > Thanks a lot for the feedback. Please let me know if you have any further comments. Meanwhile, I have also added this patch to \"Commitfest 2020-03\" at https://commitfest.postgresql.org/27/2464.\r\n >\r\n > Thanks for the excellent reproducer for this obscure bug. You said\r\n > the problem exists in 9.6-11, but I'm also able to reproduce it in\r\n > 9.5. That's the oldest supported release, but it probably goes back\r\n > further. I confirmed that this patch fixes the immediate problem.\r\n > I've attached a version of your patch with a commit message, to see if\r\n > anyone has more feedback on this.\r\n \r\n Pushed.\r\n \r\n\r\n", "msg_date": "Thu, 12 Mar 2020 14:55:03 +0000", "msg_from": "\"Palamadai, Eka\" <ekanatha@amazon.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Replica sends an incorrect epoch in its hot standby\n feedback to the Master" } ]
[ { "msg_contents": "Hi,\n\nI've encountered a problem with Postgres on PowerPC machine. Sometimes\nmake check on REL_12_STABLE branch crashes with segmentation fault.\n\nIt seems that problem is in errors.sql when executed \n\nselect infinite_recures(); statement\n\nso stack trace, produced by gdb is too long to post here.\n\nProblem is rare and doesn't occur on all runs of make check.\nWhen I run make check repeatedly it occurs once a several hundreds runs.\n\nIt seems that problem is architecture-dependent, because I cannot\nreproduce it on x86_64 CPU with more than thousand runs of make check.\n\nMachine is KVM virtual server on POWER8 system with following CPU:\n\n$ lscpu\nArchitecture: ppc64le\nByte Order: Little Endian\nCPU(s): 32\nOn-line CPU(s) list: 0-31\nThread(s) per core: 8\nCore(s) per socket: 4\nSocket(s): 1\nNUMA node(s): 1\nModel: 2.0 (pvr 004d 0200)\nModel name: POWER8 (architected), altivec supported\nHypervisor vendor: KVM\nVirtualization type: para\nL1d cache: 64K\nL1i cache: 32K\nNUMA node0 CPU(s): 0-31\n\nRunning RedHat 7.6.\n\n\n\nI've collected all relevant information i've can think of (including\n210Mb core file, git commit id, configure and backend logs, list of\ninstalled RPMs) and put it into Google Drive\nhttps://drive.google.com/file/d/1Xs7DixBhMPEmViGUt5wAMewB6_xbZirY/view\n\nHope that somebody more experienced with POWER CPUs can suggest\nsomething about this problem.\n\n-- \n\n\n", "msg_date": "Fri, 13 Mar 2020 10:29:13 +0300", "msg_from": "Victor Wagner <vitus@wagner.pp.ru>", "msg_from_op": true, "msg_subject": "make check crashes on POWER8 machine" }, { "msg_contents": "On Fri, Mar 13, 2020 at 10:29:13AM +0300, Victor Wagner wrote:\n> Hi,\n> \n> I've encountered a problem with Postgres on PowerPC machine. 
Sometimes\n\nIs it related to\nhttps://www.postgresql.org/message-id/20032.1570808731%40sss.pgh.pa.us\nhttps://bugzilla.kernel.org/show_bug.cgi?id=205183\n\n(My initial report on that thread was unrelated user-error on my part)\n\n> It seems that problem is in errors.sql when executed \n> \n> select infinite_recures(); statement\n> \n> so stack trace, produced by gdb is too long to post here.\n> \n> Problem is rare and doesn't occur on all runs of make check.\n> When I run make check repeatedly it occurs once a several hundreds runs.\n> \n> It seems that problem is architecture-dependent, because I cannot\n> reproduce it on x86_64 CPU with more than thousand runs of make check.\n\nThat's all consistent with the above problem.\n\n> Running RedHat 7.6.\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 13 Mar 2020 07:43:59 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: make check crashes on POWER8 machine" }, { "msg_contents": "On Fri, 13 Mar 2020 07:43:59 -0500\nJustin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Fri, Mar 13, 2020 at 10:29:13AM +0300, Victor Wagner wrote:\n> > Hi,\n> > \n> > I've encountered a problem with Postgres on PowerPC machine.\n> > Sometimes \n> \n> Is it related to\n> https://www.postgresql.org/message-id/20032.1570808731%40sss.pgh.pa.us\n> https://bugzilla.kernel.org/show_bug.cgi?id=205183\n\nI don't think so. 
At least I cannot see any signal handler-related stuff\nin the trace, but see lots of calls to stored procedure executor\ninstead.\n\nAlthough several different stack traces show completely different parts\nof code when signal SIGSEGV arrives, which may point to asynchronous\nnature of the problem.\n\nUnfortunately I've not kept all the cores I've seen.\n\nIt rather looks like that in some rare circumstances Postgres is unable\nto properly determine end of stack condition.\n-- \n\n\n\n", "msg_date": "Fri, 13 Mar 2020 16:16:10 +0300", "msg_from": "Victor Wagner <vitus@wagner.pp.ru>", "msg_from_op": true, "msg_subject": "Re: make check crashes on POWER8 machine" }, { "msg_contents": "Victor Wagner <vitus@wagner.pp.ru> writes:\n> Justin Pryzby <pryzby@telsasoft.com> wrote:\n>> On Fri, Mar 13, 2020 at 10:29:13AM +0300, Victor Wagner wrote:\n>>> I've encountered a problem with Postgres on PowerPC machine.\n\n>> Is it related to\n>> https://www.postgresql.org/message-id/20032.1570808731%40sss.pgh.pa.us\n>> https://bugzilla.kernel.org/show_bug.cgi?id=205183\n\n> I don't think so. At least I cannot see any signal handler-related stuff\n> in the trace, but see lots of calls to stored procedure executor\n> instead.\n\nRead the whole thread. We fixed the issue with recursion in the\npostmaster (9abb2bfc0); but the intermittent failure in infinite_recurse\nis exactly the same as what we've been seeing for a long time in the\nbuildfarm, and there is zero doubt that it's that kernel bug.\n\nIn the other thread I'd suggested that we could quit running\nerrors.sql in parallel with other tests, but that would slow down\nparallel regression testing for everybody. 
I'm disinclined to do\nthat now, since the buildfarm problem is intermittent and easily\nrecognized.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 13 Mar 2020 10:56:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: make check crashes on POWER8 machine" }, { "msg_contents": "On Fri, 13 Mar 2020 10:56:15 -0400\nTom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Victor Wagner <vitus@wagner.pp.ru> writes:\n> > Justin Pryzby <pryzby@telsasoft.com> wrote: \n> >> On Fri, Mar 13, 2020 at 10:29:13AM +0300, Victor Wagner wrote: \n> >>> I've encountered a problem with Postgres on PowerPC machine. \n> \n> >> Is it related to\n> >> https://www.postgresql.org/message-id/20032.1570808731%40sss.pgh.pa.us\n> >> https://bugzilla.kernel.org/show_bug.cgi?id=205183 \n> \n> > I don't think so. At least I cannot see any signal handler-related\n> > stuff in the trace, but see lots of calls to stored procedure\n> > executor instead. \n> \n> Read the whole thread. We fixed the issue with recursion in the\n> postmaster (9abb2bfc0); but the intermittent failure in\n> infinite_recurse is exactly the same as what we've been seeing for a\n> long time in the buildfarm, and there is zero doubt that it's that\n> kernel bug.\n\nI've tried to cherry-pick commit 9abb2bfc8 into REL_12_STABLE and rerun\nmake check in a loop. Oops, on the 543rd run it segfaults with the same\nsymptoms as before.\n\nHere is a link to the new core and logs\n\nhttps://drive.google.com/file/d/1oF-0fKHKvFn6FaJ3u-v36p9W0EBAY9nb/view?usp=sharing\n\nI'll try to do this simple test (run make check repeatedly) with \nmaster. 
There is some time until end of weekend when this machine is\nnon needed by anyone else, so I have time to couple of thousands runs.\n\n\n\n\n\n-- \n Victor Wagner <vitus@wagner.pp.ru>\n\n\n", "msg_date": "Sat, 14 Mar 2020 12:49:28 +0300", "msg_from": "Victor Wagner <vitus@wagner.pp.ru>", "msg_from_op": true, "msg_subject": "Re: make check crashes on POWER8 machine" }, { "msg_contents": "Victor Wagner <vitus@wagner.pp.ru> writes:\n> Tom Lane <tgl@sss.pgh.pa.us> пишет:\n>> Read the whole thread. We fixed the issue with recursion in the\n>> postmaster (9abb2bfc0); but the intermittent failure in\n>> infinite_recurse is exactly the same as what we've been seeing for a\n>> long time in the buildfarm, and there is zero doubt that it's that\n>> kernel bug.\n\n> I've tried to cherry-pick commit 9abb2bfc8 into REL_12_STABLE and rerun\n> make check in loop. Oops, on 543 run it segfaults with same symptoms\n> as before.\n\nUnsurprising, because it's a kernel bug. Maybe you could try\ncherry-picking the patch proposed at kernel.org (see other thread).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 14 Mar 2020 09:19:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: make check crashes on POWER8 machine" } ]
[ { "msg_contents": "Unify several ways to tracking backend type\n\nAdd a new global variable MyBackendType that uses the same BackendType\nenum that was previously only used by the stats collector. That way\nseveral duplicate ways of checking what type a particular process is\ncan be simplified. Since it's no longer just for stats, move to\nmiscinit.c and rename existing functions to match the expanded\npurpose.\n\nReviewed-by: Julien Rouhaud <rjuju123@gmail.com>\nReviewed-by: Kuntal Ghosh <kuntalghosh.2007@gmail.com>\nReviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com>\nDiscussion: https://www.postgresql.org/message-id/flat/c65e5196-4f04-4ead-9353-6088c19615a3@2ndquadrant.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/8e8a0becb335529c66a9f82f88e1419e49b458ae\n\nModified Files\n--------------\nsrc/backend/bootstrap/bootstrap.c | 48 +++++++----------\nsrc/backend/postmaster/autovacuum.c | 8 +--\nsrc/backend/postmaster/bgworker.c | 2 +-\nsrc/backend/postmaster/pgarch.c | 6 +--\nsrc/backend/postmaster/pgstat.c | 105 ++----------------------------------\nsrc/backend/postmaster/postmaster.c | 7 ++-\nsrc/backend/postmaster/syslogger.c | 3 +-\nsrc/backend/utils/adt/pgstatfuncs.c | 2 +-\nsrc/backend/utils/init/miscinit.c | 55 +++++++++++++++++++\nsrc/backend/utils/misc/ps_status.c | 11 +++-\nsrc/include/miscadmin.h | 22 ++++++++\nsrc/include/pgstat.h | 21 +-------\n12 files changed, 126 insertions(+), 164 deletions(-)", "msg_date": "Fri, 13 Mar 2020 13:28:37 +0000", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": true, "msg_subject": "pgsql: Unify several ways to tracking backend type" }, { "msg_contents": "On 2020-Mar-13, Peter Eisentraut wrote:\n\n> Unify several ways to tracking backend type\n> \n> Add a new global variable MyBackendType that uses the same BackendType\n> enum that was previously only used by the stats collector. 
That way\n> several duplicate ways of checking what type a particular process is\n> can be simplified. Since it's no longer just for stats, move to\n> miscinit.c and rename existing functions to match the expanded\n> purpose.\n\nNow that I look at this again, I realize that these backend-type\ndescriptions are not marked translatable, which is at odds with what we\ndo with HandleChildCrash, for example.\n\nNow, in addition to plastering _() to the strings, maybe we could use\nthat new function in postmaster.c, say\n\n HandleChildCrash(pid, exitstatus,\n GetBackendTypeDesc(B_CHECKPOINTER));\n\nand so on. Same with LogChildExit(). That'd reduce duplication.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 16 Mar 2020 16:22:17 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Unify several ways to tracking backend type" } ]
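The shape Álvaro suggests — one shared description function whose strings carry translation markers — can be sketched in a self-contained way. This is a cut-down model, not the actual PostgreSQL source: the enum is a small subset, and `_()` is stubbed to a no-op standing in for gettext, only to show where the markers would go.

```c
#include <assert.h>
#include <string.h>

#define _(x) (x)   /* stand-in for gettext(); real code would translate */

/* Illustrative subset of the BackendType enum for this sketch. */
typedef enum MiniBackendType
{
    B_INVALID,
    B_AUTOVAC_WORKER,
    B_CHECKPOINTER,
    B_WAL_WRITER
} MiniBackendType;

/* One place to map a backend type to a (translatable) description,
 * in the style of GetBackendTypeDesc(). */
static const char *
mini_backend_type_desc(MiniBackendType t)
{
    switch (t)
    {
        case B_AUTOVAC_WORKER:
            return _("autovacuum worker");
        case B_CHECKPOINTER:
            return _("checkpointer");
        case B_WAL_WRITER:
            return _("walwriter");
        default:
            return _("unknown process type");
    }
}
```

Callers such as a crash handler could then pass `mini_backend_type_desc(type)` instead of hard-coding each description string, which is the duplication reduction the message proposes.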
[ { "msg_contents": "Hello hackers:\n\n\nDuring crash recovery, we compare most of the lsn of xlog record with page lsn to determine if the record has already been replayed.\nThe exceptions are full-page and init-page xlog records.\nIt's restored if the xlog record includes a full-page image of the page.\nAnd it initializes the page if the xlog record include init page information.\n\n\nWhen we enable checksum for the page and verify page success, can we compare the page lsn with the lsn of full-page xlog record or init page xlog record to detemine it has already been replayed?\n\n\nBRS\nRay", "msg_date": "Fri, 13 Mar 2020 23:00:55 +0800 (CST)", "msg_from": "Thunder <thunder1@126.com>", "msg_from_op": true, "msg_subject": "Optimize crash recovery" }, { "msg_contents": "On 2020-Mar-13, Thunder wrote:\n\n> Hello hackers:\n> \n> \n> During crash recovery, we compare most of the lsn of xlog record with page lsn to determine if the record has already been replayed.\n> The exceptions are full-page and init-page xlog records.\n> It's restored if the xlog record includes a full-page image of the page.\n> And it initializes the page if the xlog record include init page information.\n> \n> \n> When we enable checksum for the page and verify page success, can we\n> compare the page lsn with the lsn of full-page xlog record or init\n> page xlog record to detemine it has already been replayed?\n\nIn order to verify that the checksum 
passes, you have to read the page\nfirst. So what are you optimizing?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 13 Mar 2020 12:41:03 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Optimize crash recovery" }, { "msg_contents": "For example, if page lsn in storage is 0x90000 and start to replay from 0x10000.\nIf 0x10000 is full-page xlog record, then we can ignore to replay xlog between 0x10000~0x90000 for this page.\n\n\nIs there any correct issue if the page exists in the buffer pool and ignore to replay for full-page or init page if page lsn is larger than the lsn of xlog record?\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nAt 2020-03-13 23:41:03, \"Alvaro Herrera\" <alvherre@2ndquadrant.com> wrote:\n>On 2020-Mar-13, Thunder wrote:\n>\n>> Hello hackers:\n>> \n>> \n>> During crash recovery, we compare most of the lsn of xlog record with page lsn to determine if the record has already been replayed.\n>> The exceptions are full-page and init-page xlog records.\n>> It's restored if the xlog record includes a full-page image of the page.\n>> And it initializes the page if the xlog record include init page information.\n>> \n>> \n>> When we enable checksum for the page and verify page success, can we\n>> compare the page lsn with the lsn of full-page xlog record or init\n>> page xlog record to detemine it has already been replayed?\n>\n>In order to verify that the checksum passes, you have to read the page\n>first. 
So what are you optimizing?\n>\n>-- \n>Álvaro Herrera https://www.2ndQuadrant.com/\n>PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Sat, 14 Mar 2020 00:22:16 +0800 (CST)", "msg_from": "Thunder <thunder1@126.com>", "msg_from_op": false, "msg_subject": "Re:Re: Optimize crash recovery" }, { "msg_contents": "On 2020-Mar-14, Thunder wrote:\n\n> For example, if page lsn in storage is 0x90000 and start to replay from 0x10000.\n> If 0x10000 is full-page xlog record, then we can ignore to replay xlog between 0x10000~0x90000 for this page.\n> \n> \n> Is there any correct issue if the page exists in the buffer pool and\n> ignore to replay for full-page or init page if page lsn is larger than\n> the lsn of xlog record?\n\nOh! right. The assumption, before we had page-level checksums, was that\nthe page at LSN 0x90000 could have been partially written, so the upper\nhalf of the contents would actually be older and thus restoring the FPI\n(and all subsequent WAL changes) was mandatory. 
But if the page\nchecksum verifies, then there's no need to return the page back to an\nold state only to replay everything to bring it to the new state again.\n\nThis seems a potentially worthwhile optimization ...\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 13 Mar 2020 13:30:58 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Re: Optimize crash recovery" }, { "msg_contents": "On Sat, Mar 14, 2020 at 5:31 AM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> On 2020-Mar-14, Thunder wrote:\n> > For example, if page lsn in storage is 0x90000 and start to replay from 0x10000.\n> > If 0x10000 is full-page xlog record, then we can ignore to replay xlog between 0x10000~0x90000 for this page.\n> >\n> > Is there any correct issue if the page exists in the buffer pool and\n> > ignore to replay for full-page or init page if page lsn is larger than\n> > the lsn of xlog record?\n>\n> Oh! right. The assumption, before we had page-level checksums, was that\n> the page at LSN 0x90000 could have been partially written, so the upper\n> half of the contents would actually be older and thus restoring the FPI\n> (and all subsequent WAL changes) was mandatory. But if the page\n> checksum verifies, then there's no need to return the page back to an\n> old state only to replay everything to bring it to the new state again.\n>\n> This seems a potentially worthwhile optimization ...\n\nOne problem is that you now have to read the block from disk, which\ncauses an I/O stall if the page is not already in the kernel page\ncache. That could be worse than the cost of replaying all the WAL\nrecords you get to skip with this trick. My WAL prefetching patch[1]\ncould mitigate that problem to some extent, depending on how much\nprefetching your system can do. 
The current version of the patch has\na GUC wal_prefetch_fpw to control whether it bothers to prefetch pages\nthat we a FPI for, because normally there's no point, but with this\ntrick you'd want to turn that on.\n\n[1] https://commitfest.postgresql.org/27/2410/\n\n\n", "msg_date": "Sat, 14 Mar 2020 08:45:48 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Re: Optimize crash recovery" } ]
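The condition debated in this thread can be written down as a tiny self-contained sketch. This is illustrative only, not PostgreSQL code: the type and function names are invented for the example, and it deliberately leaves out Thomas Munro's caveat that evaluating the condition at all requires reading the block from disk first.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t XLogRecPtrModel;   /* stand-in for XLogRecPtr */

/*
 * Sketch of the proposed optimization: a full-page-image (or init-page)
 * record could be skipped when the on-disk page passed its checksum --
 * ruling out a torn partial write -- and its LSN already covers the
 * record being replayed.
 */
static bool
can_skip_fpi_restore(bool checksum_ok,
                     XLogRecPtrModel page_lsn,
                     XLogRecPtrModel record_lsn)
{
    /* Without a verified checksum the page may be torn: must restore. */
    if (!checksum_ok)
        return false;

    /* Page already reflects this record (or a later one). */
    return page_lsn >= record_lsn;
}
```

With the thread's example (page LSN 0x90000, replay starting at 0x10000), a verified checksum lets the FPI at 0x10000 and all intermediate records for that page be skipped; a checksum failure forces the conventional restore-and-replay path.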
[ { "msg_contents": "I have 5 servers in a testing environment that are comprise a data\nwarehousing cluster. They will typically get each get exactly the\nsame query at approximately the same time. Yesterday, around 1pm, 3\nof the five got stuck on the same query. Each of them yields similar\nstack traces. This happens now and then. The server is 9.6.12\n(which is obviously old, but I did not see any changes in relevant\ncode).\n\n(gdb) bt\n#0 0x00007fe856c0b463 in __epoll_wait_nocancel () from /lib64/libc.so.6\n#1 0x00000000006b4416 in WaitEventSetWaitBlock (nevents=1,\noccurred_events=0x7ffc9f2b0f60, cur_timeout=-1, set=0x27cace8) at\nlatch.c:1053\n#2 WaitEventSetWait (set=0x27cace8, timeout=timeout@entry=-1,\noccurred_events=occurred_events@entry=0x7ffc9f2b0f60,\nnevents=nevents@entry=1) at latch.c:1007\n#3 0x00000000005f26dd in secure_write (port=0x27f16a0,\nptr=ptr@entry=0x27f5528, len=len@entry=192) at be-secure.c:255\n#4 0x00000000005fb51b in internal_flush () at pqcomm.c:1410\n#5 0x00000000005fb72a in internal_putbytes (s=0x2a4f245 \"14M04\",\ns@entry=0x2a4f228 \"\", len=70) at pqcomm.c:1356\n#6 0x00000000005fb7f0 in socket_putmessage (msgtype=68 'D',\ns=0x2a4f228 \"\", len=<optimized out>) at pqcomm.c:1553\n#7 0x00000000005fd5d9 in pq_endmessage (buf=buf@entry=0x7ffc9f2b1040)\nat pqformat.c:347\n#8 0x0000000000479a63 in printtup (slot=0x2958fc8, self=0x2b6bca0) at\nprinttup.c:372\n#9 0x00000000005c1cc9 in ExecutePlan (dest=0x2b6bca0,\ndirection=<optimized out>, numberTuples=0, sendTuples=1 '\\001',\noperation=CMD_SELECT,\n use_parallel_mode=<optimized out>, planstate=0x2958cf8,\nestate=0x2958be8) at execMain.c:1606\n#10 standard_ExecutorRun (queryDesc=0x2834998, direction=<optimized\nout>, count=0) at execMain.c:339\n#11 0x00000000006d69a7 in PortalRunSelect\n(portal=portal@entry=0x2894e38, forward=forward@entry=1 '\\001',\ncount=0, count@entry=9223372036854775807,\n dest=dest@entry=0x2b6bca0) at pquery.c:948\n#12 0x00000000006d7dbb in PortalRun 
(portal=0x2894e38,\ncount=9223372036854775807, isTopLevel=<optimized out>, dest=0x2b6bca0,\naltdest=0x2b6bca0,\n completionTag=0x7ffc9f2b14e0 \"\") at pquery.c:789\n#13 0x00000000006d5a06 in PostgresMain (argc=<optimized out>,\nargv=<optimized out>, dbname=<optimized out>, username=<optimized\nout>) at postgres.c:1109\n#14 0x000000000046fc28 in BackendRun (port=0x27f16a0) at postmaster.c:4342\n#15 BackendStartup (port=0x27f16a0) at postmaster.c:4016\n#16 ServerLoop () at postmaster.c:1721\n#17 0x0000000000678119 in PostmasterMain (argc=argc@entry=3,\nargv=argv@entry=0x27c8c90) at postmaster.c:1329\n#18 0x000000000047088e in main (argc=3, argv=0x27c8c90) at main.c:228\n(gdb) quit\n\nNow, the fact that this happened to multiple servers at time strongly\nsuggest an external (to the database) problem. The system initiating\nthe query, a cross database query over dblink, has been has given up\n(and has been restarted as a precaution) a long time ago, and the\nconnection is dead. secure_write() sets however an infinite timeout\nto the latch, and there are clearly scenarios where epoll waits\nforever for an event that is never going to occur. If/when this\nhappens, the only recourse is to restart the impacted database. The\nquestion is, shouldn't the latch have a looping timeout that checks\nfor interrupts? What would the risks be of jumping directly out of\nthe latch loop?\n\nmerlin\n\n\n", "msg_date": "Fri, 13 Mar 2020 14:08:32 -0500", "msg_from": "Merlin Moncure <mmoncure@gmail.com>", "msg_from_op": true, "msg_subject": "database stuck in __epoll_wait_nocancel(). Are infinite timeouts\n safe?" }, { "msg_contents": "Hi, \n\nOn March 13, 2020 12:08:32 PM PDT, Merlin Moncure <mmoncure@gmail.com> wrote:\n>I have 5 servers in a testing environment that are comprise a data\n>warehousing cluster. They will typically get each get exactly the\n>same query at approximately the same time. Yesterday, around 1pm, 3\n>of the five got stuck on the same query. 
Each of them yields similar\n>stack traces. This happens now and then. The server is 9.6.12\n>(which is obviously old, but I did not see any changes in relevant\n>code).\n>\n>(gdb) bt\n>#0 0x00007fe856c0b463 in __epoll_wait_nocancel () from\n>/lib64/libc.so.6\n>#1 0x00000000006b4416 in WaitEventSetWaitBlock (nevents=1,\n>occurred_events=0x7ffc9f2b0f60, cur_timeout=-1, set=0x27cace8) at\n>latch.c:1053\n>#2 WaitEventSetWait (set=0x27cace8, timeout=timeout@entry=-1,\n>occurred_events=occurred_events@entry=0x7ffc9f2b0f60,\n>nevents=nevents@entry=1) at latch.c:1007\n>#3 0x00000000005f26dd in secure_write (port=0x27f16a0,\n>ptr=ptr@entry=0x27f5528, len=len@entry=192) at be-secure.c:255\n>#4 0x00000000005fb51b in internal_flush () at pqcomm.c:1410\n>#5 0x00000000005fb72a in internal_putbytes (s=0x2a4f245 \"14M04\",\n>s@entry=0x2a4f228 \"\", len=70) at pqcomm.c:1356\n>#6 0x00000000005fb7f0 in socket_putmessage (msgtype=68 'D',\n>s=0x2a4f228 \"\", len=<optimized out>) at pqcomm.c:1553\n>#7 0x00000000005fd5d9 in pq_endmessage (buf=buf@entry=0x7ffc9f2b1040)\n>at pqformat.c:347\n>#8 0x0000000000479a63 in printtup (slot=0x2958fc8, self=0x2b6bca0) at\n>printtup.c:372\n>#9 0x00000000005c1cc9 in ExecutePlan (dest=0x2b6bca0,\n>direction=<optimized out>, numberTuples=0, sendTuples=1 '\\001',\n>operation=CMD_SELECT,\n> use_parallel_mode=<optimized out>, planstate=0x2958cf8,\n>estate=0x2958be8) at execMain.c:1606\n>#10 standard_ExecutorRun (queryDesc=0x2834998, direction=<optimized\n>out>, count=0) at execMain.c:339\n>#11 0x00000000006d69a7 in PortalRunSelect\n>(portal=portal@entry=0x2894e38, forward=forward@entry=1 '\\001',\n>count=0, count@entry=9223372036854775807,\n> dest=dest@entry=0x2b6bca0) at pquery.c:948\n>#12 0x00000000006d7dbb in PortalRun (portal=0x2894e38,\n>count=9223372036854775807, isTopLevel=<optimized out>, dest=0x2b6bca0,\n>altdest=0x2b6bca0,\n> completionTag=0x7ffc9f2b14e0 \"\") at pquery.c:789\n>#13 0x00000000006d5a06 in PostgresMain (argc=<optimized 
out>,\n>argv=<optimized out>, dbname=<optimized out>, username=<optimized\n>out>) at postgres.c:1109\n>#14 0x000000000046fc28 in BackendRun (port=0x27f16a0) at\n>postmaster.c:4342\n>#15 BackendStartup (port=0x27f16a0) at postmaster.c:4016\n>#16 ServerLoop () at postmaster.c:1721\n>#17 0x0000000000678119 in PostmasterMain (argc=argc@entry=3,\n>argv=argv@entry=0x27c8c90) at postmaster.c:1329\n>#18 0x000000000047088e in main (argc=3, argv=0x27c8c90) at main.c:228\n>(gdb) quit\n>\n>Now, the fact that this happened to multiple servers at time strongly\n>suggest an external (to the database) problem. The system initiating\n>the query, a cross database query over dblink, has been has given up\n>(and has been restarted as a precaution) a long time ago, and the\n>connection is dead. secure_write() sets however an infinite timeout\n>to the latch, and there are clearly scenarios where epoll waits\n>forever for an event that is never going to occur. If/when this\n>happens, the only recourse is to restart the impacted database. The\n>question is, shouldn't the latch have a looping timeout that checks\n>for interrupts? What would the risks be of jumping directly out of\n>the latch loop?\n\nUnless there is a kernel problem, latches are interruptible by signals, as the signal handler should do a SetLatch().\n\nThis backtrace just looks like the backend is trying to send data to the client? What makes you think it's stuck?\n\nIf the connection is dead, epoll should return (both because we ask for the relevant events, and because it just always implicitly does do so).\n\nSo it seems likely that either your connection isn't actually dead (e.g. waiting for tcp timeouts), or you have a kernel bug.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.", "msg_date": "Fri, 13 Mar 2020 12:28:37 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: database stuck in __epoll_wait_nocancel(). 
Are infinite timeouts\n safe?" }, { "msg_contents": "On Fri, Mar 13, 2020 at 2:28 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On March 13, 2020 12:08:32 PM PDT, Merlin Moncure <mmoncure@gmail.com> wrote:\n> >I have 5 servers in a testing environment that are comprise a data\n> >warehousing cluster. They will typically get each get exactly the\n> >same query at approximately the same time. Yesterday, around 1pm, 3\n> >of the five got stuck on the same query. Each of them yields similar\n> >stack traces. This happens now and then. The server is 9.6.12\n> >(which is obviously old, but I did not see any changes in relevant\n> >code).\n> >\n> >(gdb) bt\n> >#0 0x00007fe856c0b463 in __epoll_wait_nocancel () from\n> >/lib64/libc.so.6\n> >#1 0x00000000006b4416 in WaitEventSetWaitBlock (nevents=1,\n> >occurred_events=0x7ffc9f2b0f60, cur_timeout=-1, set=0x27cace8) at\n> >latch.c:1053\n> >#2 WaitEventSetWait (set=0x27cace8, timeout=timeout@entry=-1,\n> >occurred_events=occurred_events@entry=0x7ffc9f2b0f60,\n> >nevents=nevents@entry=1) at latch.c:1007\n> >#3 0x00000000005f26dd in secure_write (port=0x27f16a0,\n> >ptr=ptr@entry=0x27f5528, len=len@entry=192) at be-secure.c:255\n> >#4 0x00000000005fb51b in internal_flush () at pqcomm.c:1410\n> >#5 0x00000000005fb72a in internal_putbytes (s=0x2a4f245 \"14M04\",\n> >s@entry=0x2a4f228 \"\", len=70) at pqcomm.c:1356\n> >#6 0x00000000005fb7f0 in socket_putmessage (msgtype=68 'D',\n> >s=0x2a4f228 \"\", len=<optimized out>) at pqcomm.c:1553\n> >#7 0x00000000005fd5d9 in pq_endmessage (buf=buf@entry=0x7ffc9f2b1040)\n> >at pqformat.c:347\n> >#8 0x0000000000479a63 in printtup (slot=0x2958fc8, self=0x2b6bca0) at\n> >printtup.c:372\n> >#9 0x00000000005c1cc9 in ExecutePlan (dest=0x2b6bca0,\n> >direction=<optimized out>, numberTuples=0, sendTuples=1 '\\001',\n> >operation=CMD_SELECT,\n> > use_parallel_mode=<optimized out>, planstate=0x2958cf8,\n> >estate=0x2958be8) at execMain.c:1606\n> >#10 standard_ExecutorRun 
(queryDesc=0x2834998, direction=<optimized\n> >out>, count=0) at execMain.c:339\n> >#11 0x00000000006d69a7 in PortalRunSelect\n> >(portal=portal@entry=0x2894e38, forward=forward@entry=1 '\\001',\n> >count=0, count@entry=9223372036854775807,\n> > dest=dest@entry=0x2b6bca0) at pquery.c:948\n> >#12 0x00000000006d7dbb in PortalRun (portal=0x2894e38,\n> >count=9223372036854775807, isTopLevel=<optimized out>, dest=0x2b6bca0,\n> >altdest=0x2b6bca0,\n> > completionTag=0x7ffc9f2b14e0 \"\") at pquery.c:789\n> >#13 0x00000000006d5a06 in PostgresMain (argc=<optimized out>,\n> >argv=<optimized out>, dbname=<optimized out>, username=<optimized\n> >out>) at postgres.c:1109\n> >#14 0x000000000046fc28 in BackendRun (port=0x27f16a0) at\n> >postmaster.c:4342\n> >#15 BackendStartup (port=0x27f16a0) at postmaster.c:4016\n> >#16 ServerLoop () at postmaster.c:1721\n> >#17 0x0000000000678119 in PostmasterMain (argc=argc@entry=3,\n> >argv=argv@entry=0x27c8c90) at postmaster.c:1329\n> >#18 0x000000000047088e in main (argc=3, argv=0x27c8c90) at main.c:228\n> >(gdb) quit\n> >\n> >Now, the fact that this happened to multiple servers at time strongly\n> >suggest an external (to the database) problem. The system initiating\n> >the query, a cross database query over dblink, has been has given up\n> >(and has been restarted as a precaution) a long time ago, and the\n> >connection is dead. secure_write() sets however an infinite timeout\n> >to the latch, and there are clearly scenarios where epoll waits\n> >forever for an event that is never going to occur. If/when this\n> >happens, the only recourse is to restart the impacted database. The\n> >question is, shouldn't the latch have a looping timeout that checks\n> >for interrupts? 
What would the risks be of jumping directly out of\n> >the latch loop?\n>\n> Unless there is a kernel problem latches are interruptible by signals, as the signal handler should do a SetLatch().\n>\n> This backtrace just looks like the backend is trying to send data to the client? What makes you think it's stuck?\n\nWell, the client has been gone for > 24 hours. But you're right, when\nI send cancel to the backend, here is what happens according to\nstrace:\nepoll_wait(3, 0x2915e08, 1, -1) = -1 EINTR (Interrupted system call)\n--- SIGINT {si_signo=SIGINT, si_code=SI_USER, si_pid=5024, si_uid=26} ---\nwrite(13, \"\\0\", 1) = 1\nrt_sigreturn({mask=[]}) = -1 EINTR (Interrupted system call)\nsendto(11, \"\\0\\0\\0\\00545780\\0\\0\\0\\003508D\\0\\0\\0d\\0\\t\\0\\0\\0\\00615202\"...,\n5640, 0, NULL, 0) = -1 EAGAIN (Resource temporarily unavailable)\nepoll_wait(3, [{EPOLLIN, {u32=43081176, u64=43081176}}], 1, -1) = 1\nread(12, \"\\0\", 16) = 1\nepoll_wait(3,\n\n\n...pg_terminate_backend() however, does properly kill the query.\n\n> If the connection is dead, epoll should return (both because we ask for the relevant events, and because it just always implicitly does do so).\n>\n> So it seems likely that either your connection isn't actually dead (e.g. waiting for tcp timeouts), or you have a kernel bug.\n\nmaybe, I suspect firewall issue. hard to say\n\nmerlin\n\n\n", "msg_date": "Fri, 13 Mar 2020 15:08:11 -0500", "msg_from": "Merlin Moncure <mmoncure@gmail.com>", "msg_from_op": true, "msg_subject": "Re: database stuck in __epoll_wait_nocancel(). Are infinite timeouts\n safe?" } ]
[ { "msg_contents": "parse_coerce.c's resolve_generic_type() is written on the assumption\nthat it might be asked to infer any polymorphic type's concrete type\ngiven any other polymorphic type's concrete type. This is overkill,\nbecause the only call sites look like\n\n anyelement_type = resolve_generic_type(ANYELEMENTOID,\n anyarray_type,\n ANYARRAYOID);\n\n subtype = resolve_generic_type(ANYELEMENTOID,\n anyrange_type,\n ANYRANGEOID);\n\n anyarray_type = resolve_generic_type(ANYARRAYOID,\n anyelement_type,\n ANYELEMENTOID);\n\n(There are two occurrences of each of these in funcapi.c,\nand nothing anywhere else.)\n\nBut that's a good thing, because resolve_generic_type() gets some\nof the un-exercised cases wrong. Notably, it appears to believe\nthat anyrange is the same type variable as anyelement: if asked\nto resolve anyarray from anyrange, it will produce the array over\nthe concrete range type, where it should produce the array over\nthe range's subtype.\n\nRather than fix this up, I'm inclined to just nuke resolve_generic_type()\naltogether, and replace the call sites with direct uses of the underlying\nlookups such as get_array_type(). I think this is simpler to understand\nas well as being significantly less code.\n\nI also noticed that there's an asymmetry in funcapi.c's usage pattern:\nit can resolve anyelement from either anyarray or anyrange, but it\ncan only resolve anyarray from anyelement not anyrange. 
This results\nin warts like so:\n\nregression=# create function foo(x anyrange) returns anyarray language sql\nas 'select array[lower(x),upper(x)]';\nCREATE FUNCTION\nregression=# select foo(int4range(6,9));\n foo \n-------\n {6,9}\n(1 row)\n\n-- so far so good, but let's try it with OUT parameters:\n\nregression=# create function foo2(x anyrange, out lu anyarray, out ul anyarray)\nlanguage sql\nas 'select array[lower(x),upper(x)], array[upper(x),lower(x)]';\nCREATE FUNCTION\nregression=# select * from foo2(int4range(6,9));\nERROR: cache lookup failed for type 0\n\nSo that's a bug that needs fixing in any case.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 14 Mar 2020 12:08:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "resolve_generic_type() is over-complex and under-correct" } ]
[ { "msg_contents": "See also:\nhttps://commitfest.postgresql.org/27/2390/\nhttps://www.postgresql.org/message-id/flat/CAOBaU_Yy5bt0vTPZ2_LUM6cUcGeqmYNoJ8-Rgto+c2+w3defYA@mail.gmail.com\nb025f32e0b Add leader_pid to pg_stat_activity\n\n-- \nJustin", "msg_date": "Sun, 15 Mar 2020 06:18:31 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "expose parallel leader in CSV and log_line_prefix" }, { "msg_contents": "On Sun, Mar 15, 2020 at 06:18:31AM -0500, Justin Pryzby wrote:\n> See also:\n> https://commitfest.postgresql.org/27/2390/\n> https://www.postgresql.org/message-id/flat/CAOBaU_Yy5bt0vTPZ2_LUM6cUcGeqmYNoJ8-Rgto+c2+w3defYA@mail.gmail.com\n> b025f32e0b Add leader_pid to pg_stat_activity\n\n\nFTR this is a followup of https://www.postgresql.org/message-id/20200315095728.GA26184%40telsasoft.com\n\n+1 for the feature. Regarding the patch:\n\n\n+ case 'k':\n+ if (MyBackendType != B_BG_WORKER)\n+ ; /* Do nothing */\n\n\nIsn't the test inverted? Also a bgworker could run parallel queries through\nSPI I think, should we really ignore bgworkers?\n\n+ else if (!MyProc->lockGroupLeader)\n+ ; /* Do nothing */\n\n\nThere should be a test that MyProc isn't NULL.\n\n+ else if (padding != 0)\n+ appendStringInfo(buf, \"%*d\", padding, MyProc->lockGroupLeader->pid);\n+ else\n+ appendStringInfo(buf, \"%d\", MyProc->lockGroupLeader->pid);\n+ break;\n\nI think that if padding was asked we should append spaces rather than doing\nnothing.\n\n\n", "msg_date": "Sun, 15 Mar 2020 12:49:33 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: expose parallel leader in CSV and log_line_prefix" }, { "msg_contents": "On Sun, Mar 15, 2020 at 12:49:33PM +0100, Julien Rouhaud wrote:\n> On Sun, Mar 15, 2020 at 06:18:31AM -0500, Justin Pryzby wrote:\n> > See also:\n> > https://commitfest.postgresql.org/27/2390/\n> > 
https://www.postgresql.org/message-id/flat/CAOBaU_Yy5bt0vTPZ2_LUM6cUcGeqmYNoJ8-Rgto+c2+w3defYA@mail.gmail.com\n> > b025f32e0b Add leader_pid to pg_stat_activity\n> \n> FTR this is a followup of https://www.postgresql.org/message-id/20200315095728.GA26184%40telsasoft.com\n\nYes - but I wasn't going to draw attention to the first patch, in which I did\nsomething needlessly complicated and indirect. :)\n\n> + case 'k':\n> + if (MyBackendType != B_BG_WORKER)\n> + ; /* Do nothing */\n> \n> \n> Isn't the test inverted? Also a bgworker could run parallel queries through\n> SPI I think, should we really ignore bgworkers?\n\nI don't think it's reversed, but I think I see your point: the patch is\nsupposed to be showing the leader's own PID for the leader itself. So I think\nthat can just be removed.\n\n> + else if (!MyProc->lockGroupLeader)\n> + ; /* Do nothing */\n> \n> There should be a test that MyProc isn't NULL.\n\nYes, done.\n\n> + else if (padding != 0)\n> + appendStringInfo(buf, \"%*d\", padding, MyProc->lockGroupLeader->pid);\n> + else\n> + appendStringInfo(buf, \"%d\", MyProc->lockGroupLeader->pid);\n> + break;\n> \n> I think that if padding was asked we should append spaces rather than doing\n> nothing.\n\nDone\n\nIt logs like:\n\ntemplate1=# SET log_temp_files=0; explain analyze SELECT a,COUNT(1) FROM t a JOIN t b USING(a) GROUP BY 1;\n2020-03-15 21:20:47.288 CDT [5537 5537]LOG: statement: SET log_temp_files=0;\nSET\n2020-03-15 21:20:47.289 CDT [5537 5537]LOG: statement: explain analyze SELECT a,COUNT(1) FROM t a JOIN t b USING(a) GROUP BY 1;\n2020-03-15 21:20:51.253 CDT [5627 5537]LOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp5627.0\", size 6094848\n2020-03-15 21:20:51.253 CDT [5627 5537]STATEMENT: explain analyze SELECT a,COUNT(1) FROM t a JOIN t b USING(a) GROUP BY 1;\n2020-03-15 21:20:51.254 CDT [5626 5537]LOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp5626.0\", size 6103040\n2020-03-15 21:20:51.254 CDT [5626 5537]STATEMENT: explain analyze SELECT 
a,COUNT(1) FROM t a JOIN t b USING(a) GROUP BY 1;\n2020-03-15 21:20:51.263 CDT [5537 5537]LOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp5537.1.sharedfileset/o15of16.p0.0\", size 557056\n\nNow, with the leader showing its own PID.\n\nThis also fixes unsafe access to lockGroupLeader->pid, same issue as in the\noriginal v1 patch for b025f32e0b.\n\n-- \nJustin", "msg_date": "Wed, 18 Mar 2020 16:25:11 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: expose parallel leader in CSV and log_line_prefix" }, { "msg_contents": "> On 18 Mar 2020, at 22:25, Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> This also fixes unsafe access to lockGroupLeader->pid, same issue as in the\n> original v1 patch for b025f32e0b.\n\nJulien, having been involved in the other threads around this topic, do you\nhave time to review this latest version during the commitfest?\n\ncheers ./daniel\n\n\n", "msg_date": "Thu, 9 Jul 2020 13:48:55 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: expose parallel leader in CSV and log_line_prefix" }, { "msg_contents": "On Thu, Jul 9, 2020 at 1:48 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> > On 18 Mar 2020, at 22:25, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> > This also fixes unsafe access to lockGroupLeader->pid, same issue as in the\n> > original v1 patch for b025f32e0b.\n>\n> Julien, having been involved in the other threads around this topic, do you\n> have time to review this latest version during the commitfest?\n\nSure! 
I've been quite busy with internal work duties recently but\n> I'll review this patch shortly. Thanks for the reminder!\n\nHmm. In which cases would it be useful to have this information in\nthe logs knowing that pg_stat_activity lets us know the link between\nboth the leader and its workers?\n--\nMichael", "msg_date": "Fri, 10 Jul 2020 11:09:40 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: expose parallel leader in CSV and log_line_prefix" }, { "msg_contents": "On Fri, Jul 10, 2020 at 11:09:40AM +0900, Michael Paquier wrote:\n> On Thu, Jul 09, 2020 at 01:53:39PM +0200, Julien Rouhaud wrote:\n> > Sure! I've been quite busy with internal work duties recently but\n> > I'll review this patch shortly. Thanks for the reminder!\n> \n> Hmm. In which cases would it be useful to have this information in\n> the logs knowing that pg_stat_activity lets us know the link between\n> both the leader and its workers?\n\nPSA is an instantaneous view whereas the logs are a record. That's important\nfor shortlived processes (like background workers) or in the case of an ERROR\nor later crash.\n\nRight now, the logs fail to include that information, which is deficient. Half\nthe utility is in showing *that* the log is for a parallel worker, which is\notherwise not apparent.\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 9 Jul 2020 21:20:23 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: expose parallel leader in CSV and log_line_prefix" }, { "msg_contents": "On Thu, Jul 09, 2020 at 09:20:23PM -0500, Justin Pryzby wrote:\n> On Fri, Jul 10, 2020 at 11:09:40AM +0900, Michael Paquier wrote:\n> > On Thu, Jul 09, 2020 at 01:53:39PM +0200, Julien Rouhaud wrote:\n> > > Sure! I've been quite busy with internal work duties recently but\n> > > I'll review this patch shortly. Thanks for the reminder!\n> > \n> > Hmm. 
In which cases would it be useful to have this information in\n> > the logs knowing that pg_stat_activity lets us know the link between\n> > both the leader and its workers?\n> \n> PSA is an instantaneous view whereas the logs are a record. That's important\n> for shortlived processes (like background workers) or in the case of an ERROR\n> or later crash.\n> \n> Right now, the logs fail to include that information, which is deficient. Half\n> the utility is in showing *that* the log is for a parallel worker, which is\n> otherwise not apparent.\n\nYes, I agree that this is a nice thing to have and another small step toward\nparallel query monitoring.\n\nAbout the patch:\n\n+ case 'k':\n+ if (MyProc)\n+ {\n+ PGPROC *leader = MyProc->lockGroupLeader;\n+ if (leader == NULL)\n+ /* padding only */\n+ appendStringInfoSpaces(buf,\n+ padding > 0 ? padding : -padding);\n+ else if (padding != 0)\n+ appendStringInfo(buf, \"%*d\", padding, leader->pid);\n+ else\n+ appendStringInfo(buf, \"%d\", leader->pid);\n+ }\n+ break;\n\nThere's a thinko in the padding handling. It should be done whether MyProc\nand/or lockGroupLeader is NULL or not, and only if padding was asked, like it's\ndone for case 'd' for instance.\n\nAlso, the '%k' escape sounds a bit random. 
Is there any reason why we don't\n> use any uppercase character for log_line_prefix? %P could be a better\n> alternative, otherwise maybe %g, as GroupLeader/Gather?\n\nThanks for looking. %P is a good idea - it's consistent with ps and pkill and\nprobably other %commands. I also amended the docs.\n\n-- \nJustin", "msg_date": "Fri, 10 Jul 2020 11:11:15 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: expose parallel leader in CSV and log_line_prefix" }, { "msg_contents": "On Fri, Jul 10, 2020 at 11:11:15AM -0500, Justin Pryzby wrote:\n> On Fri, Jul 10, 2020 at 05:13:26PM +0200, Julien Rouhaud wrote:\n> > There's a thinko in the padding handling. It should be dones whether MyProc\n> > and/or lockGroupLeader is NULL or not, and only if padding was asked, like it's\n> > done for case 'd' for instance.\n> > \n> > Also, the '%k' escape sounds a bit random. Is there any reason why we don't\n> > use any uppercase character for log_line_prefix? %P could be a better\n> > alternative, otherwise maybe %g, as GroupLeader/Gather?\n> \n> Thanks for looking. %P is a good idea - it's consistent with ps and pkill and\n> probably other %commands. 
I also amended the docs.\n\nThanks!\n\nSo for the leader == NULL case, the AppendStringInfoSpace is a no-op if no\npadding was asked, so it's probably not worth adding extra code to make it any\nmore obvious.\n\nIt all looks good to me, I'm marking the patch as ready for committer!\n\n\n", "msg_date": "Fri, 10 Jul 2020 18:39:09 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: expose parallel leader in CSV and log_line_prefix" }, { "msg_contents": "On 2020-Mar-18, Justin Pryzby wrote:\n\n> On Sun, Mar 15, 2020 at 12:49:33PM +0100, Julien Rouhaud wrote:\n\n> template1=# SET log_temp_files=0; explain analyze SELECT a,COUNT(1) FROM t a JOIN t b USING(a) GROUP BY 1;\n> 2020-03-15 21:20:47.288 CDT [5537 5537]LOG: statement: SET log_temp_files=0;\n> SET\n> 2020-03-15 21:20:47.289 CDT [5537 5537]LOG: statement: explain analyze SELECT a,COUNT(1) FROM t a JOIN t b USING(a) GROUP BY 1;\n> 2020-03-15 21:20:51.253 CDT [5627 5537]LOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp5627.0\", size 6094848\n> 2020-03-15 21:20:51.253 CDT [5627 5537]STATEMENT: explain analyze SELECT a,COUNT(1) FROM t a JOIN t b USING(a) GROUP BY 1;\n> 2020-03-15 21:20:51.254 CDT [5626 5537]LOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp5626.0\", size 6103040\n> 2020-03-15 21:20:51.254 CDT [5626 5537]STATEMENT: explain analyze SELECT a,COUNT(1) FROM t a JOIN t b USING(a) GROUP BY 1;\n> 2020-03-15 21:20:51.263 CDT [5537 5537]LOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp5537.1.sharedfileset/o15of16.p0.0\", size 557056\n\nI think it's overly verbose; all non-parallel backends are going to get\ntheir own PID twice, and I'm not sure this is going to be great to\nparse. 
I think it would be more sensible that if the process does not\nhave a parent (leader), %P expands to empty.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 10 Jul 2020 12:45:29 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: expose parallel leader in CSV and log_line_prefix" }, { "msg_contents": "On Fri, Jul 10, 2020 at 12:45:29PM -0400, Alvaro Herrera wrote:\n> On 2020-Mar-18, Justin Pryzby wrote:\n> \n> > On Sun, Mar 15, 2020 at 12:49:33PM +0100, Julien Rouhaud wrote:\n> \n> > template1=# SET log_temp_files=0; explain analyze SELECT a,COUNT(1) FROM t a JOIN t b USING(a) GROUP BY 1;\n> > 2020-03-15 21:20:47.288 CDT [5537 5537]LOG: statement: SET log_temp_files=0;\n> > SET\n> > 2020-03-15 21:20:47.289 CDT [5537 5537]LOG: statement: explain analyze SELECT a,COUNT(1) FROM t a JOIN t b USING(a) GROUP BY 1;\n> > 2020-03-15 21:20:51.253 CDT [5627 5537]LOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp5627.0\", size 6094848\n> > 2020-03-15 21:20:51.253 CDT [5627 5537]STATEMENT: explain analyze SELECT a,COUNT(1) FROM t a JOIN t b USING(a) GROUP BY 1;\n> > 2020-03-15 21:20:51.254 CDT [5626 5537]LOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp5626.0\", size 6103040\n> > 2020-03-15 21:20:51.254 CDT [5626 5537]STATEMENT: explain analyze SELECT a,COUNT(1) FROM t a JOIN t b USING(a) GROUP BY 1;\n> > 2020-03-15 21:20:51.263 CDT [5537 5537]LOG: temporary file: path \"base/pgsql_tmp/pgsql_tmp5537.1.sharedfileset/o15of16.p0.0\", size 557056\n> \n> I think it's overly verbose; all non-parallel backends are going to get\n> their own PID twice, and I'm not sure this is going to be great to\n> parse. 
I think it would be more sensible that if the process does not\n> have a parent (leader), %P expands to empty.\n\nThat's what's done.\n\n+ <entry>Process ID of the parallel group leader if this process was\n+ at some point involved in parallel query, otherwise null. For a\n+ parallel group leader itself, this field is set to its own process\n+ ID.</entry>\n\n2020-07-10 11:53:32.304 CDT [16699 ]LOG: statement: SELECT 1;\n2020-07-10 11:53:32.304 CDT,\"pryzbyj\",\"postgres\",16699,\"[local]\",5f089d0b.413b,1,\"idle\",2020-07-10 11:53:31 CDT,3/4,0,LOG,00000,\"statement: SELECT 1;\",,,,,,,,,\"psql\",\"client backend\",\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 10 Jul 2020 11:55:25 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: expose parallel leader in CSV and log_line_prefix" }, { "msg_contents": "On 2020-Jul-10, Justin Pryzby wrote:\n> On Fri, Jul 10, 2020 at 12:45:29PM -0400, Alvaro Herrera wrote:\n\n> > I think it's overly verbose; all non-parallel backends are going to get\n> > their own PID twice, and I'm not sure this is going to be great to\n> > parse. I think it would be more sensible that if the process does not\n> > have a parent (leader), %P expands to empty.\n> \n> That's what's done.\n> \n> + <entry>Process ID of the parallel group leader if this process was\n> + at some point involved in parallel query, otherwise null. 
For a\n> + parallel group leader itself, this field is set to its own process\n> + ID.</entry>\n\nOh, okay by me then.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 10 Jul 2020 13:16:40 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: expose parallel leader in CSV and log_line_prefix" }, { "msg_contents": "On Fri, Jul 10, 2020 at 01:16:40PM -0400, Alvaro Herrera wrote:\n> On 2020-Jul-10, Justin Pryzby wrote:\n>> That's what's done.\n>> \n>> + <entry>Process ID of the parallel group leader if this process was\n>> + at some point involved in parallel query, otherwise null. 
This patch is just\ndoing what we do in pg_stat_get_activity(), with the padding handling.\nIt is true that this may cause log_line_prefix to be overly verbose in\nthe case where you keep a lot of sessions alive for long time when\nthey got all involved at least once in parallel query as most of them\nwould just refer to their own PID, but I think that it is better to be\nconsistent with what we do already with pg_stat_activity, as that's\nthe data present in the PGPROC entries.\n--\nMichael", "msg_date": "Fri, 17 Jul 2020 11:41:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: expose parallel leader in CSV and log_line_prefix" }, { "msg_contents": "On 2020-Jul-17, Michael Paquier wrote:\n\n> Please note that this choice comes from BecomeLockGroupLeader(), where\n> a leader registers itself in lockGroupLeader, and remains set as such\n> as long as the process is alive so we would always get a value for a\n> process once it got involved in parallel query. This patch is just\n\nOh, ugh, I don't like that part much. If you run connections through a\nconnection pooler, it's going to be everywhere. Let's put it there only\nif the connection *is* running a parallel query, without being too\nstressed about the startup and teardown sequence.\n\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 16 Jul 2020 22:55:45 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: expose parallel leader in CSV and log_line_prefix" }, { "msg_contents": "On Thu, Jul 16, 2020 at 10:55:45PM -0400, Alvaro Herrera wrote:\n> Oh, ugh, I don't like that part much. If you run connections through a\n> connection pooler, it's going to be everywhere. 
Let's put it there only\n> if the connection *is* running a parallel query, without being too\n> stressed about the startup and teardown sequence.\n\nHmm. Knowing if a leader is actually running parallel query or not\nrequires a lookup at lockGroupMembers, that itself requires a LWLock.\nI think that it would be better to not require that. So what if\ninstead we logged %P only if Myproc has lockGroupLeader set and it\ndoes *not* match MyProcPid? In short, it means that we would get the\ninformation of a leader for each worker currently running parallel\nquery, but that we would not know from the leader if it is running a\nparallel query or not at the moment of the log. One can then easily\nguess what was happening on the leader by looking at the logs of the\nbackend matching with the PID the workers are logging with %P.\n--\nMichael", "msg_date": "Fri, 17 Jul 2020 14:01:21 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: expose parallel leader in CSV and log_line_prefix" }, { "msg_contents": "Hi,\n\nOn Fri, Jul 17, 2020 at 7:01 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Jul 16, 2020 at 10:55:45PM -0400, Alvaro Herrera wrote:\n> > Oh, ugh, I don't like that part much. If you run connections through a\n> > connection pooler, it's going to be everywhere. Let's put it there only\n> > if the connection *is* running a parallel query, without being too\n> > stressed about the startup and teardown sequence.\n>\n> Hmm. Knowing if a leader is actually running parallel query or not\n> requires a lookup at lockGroupMembers, that itself requires a LWLock.\n> I think that it would be better to not require that. So what if\n> instead we logged %P only if Myproc has lockGroupLeader set and it\n> does *not* match MyProcPid? 
In short, it means that we would get the\n> information of a leader for each worker currently running parallel\n> query, but that we would not know from the leader if it is running a\n> parallel query or not at the moment of the log. One can then easily\n> guess what was happening on the leader by looking at the logs of the\n> backend matching with the PID the workers are logging with %P.\n\nI had the same concern and was thinking about this approach too.\nAnother argument is that IIUC any log emitted due to\nlog_min_duration_statement wouldn't see the backend as executing a\nparallel query, since the workers would already have been shut down.\n\n\n", "msg_date": "Fri, 17 Jul 2020 07:34:54 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: expose parallel leader in CSV and log_line_prefix" }, { "msg_contents": "> On Fri, Jul 17, 2020 at 7:01 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> > Hmm. Knowing if a leader is actually running parallel query or not\n> > requires a lookup at lockGroupMembers, that itself requires a LWLock.\n> > I think that it would be better to not require that. So what if\n> > instead we logged %P only if Myproc has lockGroupLeader set and it\n> > does *not* match MyProcPid?\n\nThat's what I said first, so +1 for that approach.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 17 Jul 2020 11:35:40 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: expose parallel leader in CSV and log_line_prefix" }, { "msg_contents": "On Fri, Jul 17, 2020 at 11:35:40AM -0400, Alvaro Herrera wrote:\n> > On Fri, Jul 17, 2020 at 7:01 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > > Hmm. 
Knowing if a leader is actually running parallel query or not\n> > > requires a lookup at lockGroupMembers, that itself requires a LWLock.\n> > > I think that it would be better to not require that. So what if\n> > > instead we logged %P only if Myproc has lockGroupLeader set and it\n> > > does *not* match MyProcPid?\n> \n> That's what I said first, so +1 for that approach.\n\nOk, but should we then consider changing pg_stat_activity for consistency ?\nProbably in v13 to avoid changing it a year later.\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=b025f32e0b5d7668daec9bfa957edf3599f4baa8\n\nI think the story is that we're exposing to the user a \"leader pid\" what's\ninternally called (and used as) the \"lock group leader\", which for the leader\nprocess is set to its own PID. But I think what we're exposing as leader_pid\nwill seem like an implementation artifact to users. It's unnatural to define a\nleader PID for the leader itself, and I'm guessing that at least 30% of people\nwho use pg_stat_activity.leader_pid will be surprised by rows with\n| backend_type='client backend' AND leader_pid IS NOT NULL\nAnd maybe additionally confused if PSA doesn't match CSV or other log.\n\nRight now, PSA will include processes \"were leader\" queries like:\n| SELECT pid FROM pg_stat_activity WHERE pid=leader_pid\nIf we change it, I think you can get the same thing for a *current* leader like:\n| SELECT pid FROM pg_stat_activity a WHERE EXISTS (SELECT 1 FROM pg_stat_activity b WHERE b.leader_pid=a.pid);\nBut once the children die, you can't get that anymore. Is that a problem ?\n\nI didn't think of it until now, but it would be useful to query logs for\nprocesses which were involved in parallel process. (It would be more useful if\nit indicated the query, and not just the process)\n\nI agree that showing the PID as the leader PID while using a connection pooler\nis \"noisy\". 
But I think that's maybe just a consequence of connection pooling.\nAs an analogy, I would normally use a query like:\n| SELECT session_line, message, query FROM postgres_log WHERE session_id='..' ORDER BY 1\nBut that already doesn't work usefully with connection pooling (and I'm not\nsure how to resolve that other than by not using pooling when logs are useful)\n\nI'm not sure what the answer. Probably we should either make both expose\nlockGroupLeader exactly (and not filtered) or make both show lockGroupLeader\nonly if lockGroupLeader!=getpid().\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 17 Jul 2020 15:54:21 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: expose parallel leader in CSV and log_line_prefix" }, { "msg_contents": "On 2020-Jul-17, Justin Pryzby wrote:\n\n> Ok, but should we then consider changing pg_stat_activity for consistency ?\n> Probably in v13 to avoid changing it a year later.\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=b025f32e0b5d7668daec9bfa957edf3599f4baa8\n> \n> I think the story is that we're exposing to the user a \"leader pid\" what's\n> internally called (and used as) the \"lock group leader\", which for the leader\n> process is set to its own PID. But I think what we're exposing as leader_pid\n> will seem like an implementation artifact to users.\n\nIMO it *is* an implementation artifact if, as you say, the leader PID\nremains set after the parallel query is done. I mentioned the pgbouncer\ncase before: if you run a single parallel query, then the process\nremains a \"parallel leader\" for days or weeks afterwards even if it\nhasn't run a parallel query ever since. That doesn't sound great to me.\n\nI think it's understandable and OK if there's a small race condition\nthat means you report a process as a leader shortly before or shortly\nafter a parallel query is actually executed. 
But doing so until backend\ntermination seems confusing as well as useless.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 17 Jul 2020 17:27:21 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: expose parallel leader in CSV and log_line_prefix" }, { "msg_contents": "On Fri, Jul 17, 2020 at 05:27:21PM -0400, Alvaro Herrera wrote:\n> On 2020-Jul-17, Justin Pryzby wrote:\n> > Ok, but should we then consider changing pg_stat_activity for consistency ?\n> > Probably in v13 to avoid changing it a year later.\n> > https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=b025f32e0b5d7668daec9bfa957edf3599f4baa8\n> > \n> > I think the story is that we're exposing to the user a \"leader pid\" what's\n> > internally called (and used as) the \"lock group leader\", which for the leader\n> > process is set to its own PID. But I think what we're exposing as leader_pid\n> > will seem like an implementation artifact to users.\n> \n> IMO it *is* an implementation artifact if, as you say, the leader PID\n> remains set after the parallel query is done. I mentioned the pgbouncer\n> case before: if you run a single parallel query, then the process\n> remains a \"parallel leader\" for days or weeks afterwards even if it\n> hasn't run a parallel query ever since. That doesn't sound great to me.\n> \n> I think it's understandable and OK if there's a small race condition\n> that means you report a process as a leader shortly before or shortly\n> after a parallel query is actually executed. But doing so until backend\n> termination seems confusing as well as useless.\n\nI'm not sure that connection pooling is the strongest argument against the\ncurrent behavior, but we could change it as suggested to show as NULL the\nleader_pid for the leader's own process. I think that's the intuitive behavior\na user expects. 
Parallel processes are those with leader_pid IS NOT NULL. If\nwe ever used lockGroupLeader for something else, you'd also have to say AND\nbackend_type='parallel worker'.\n\nWe should talk about doing that for PSA and for v13 as well. Here or on the\nother thread or a new thread ? It's a simple enough change, but the question\nis if we want to provide a more \"cooked\" view which hides the internals, and if\nso, is this really enough.\n\n--- a/src/backend/utils/adt/pgstatfuncs.c\n+++ b/src/backend/utils/adt/pgstatfuncs.c\n@@ -737,3 +737,4 @@ pg_stat_get_activity(PG_FUNCTION_ARGS)\n leader = proc->lockGroupLeader;\n- if (leader)\n+ if (leader && leader->pid != beentry->st_procpid)\n {\n values[29] = Int32GetDatum(leader->pid);\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 17 Jul 2020 17:32:36 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: expose parallel leader in CSV and log_line_prefix" }, { "msg_contents": "On Fri, Jul 17, 2020 at 05:32:36PM -0500, Justin Pryzby wrote:\n> On Fri, Jul 17, 2020 at 05:27:21PM -0400, Alvaro Herrera wrote:\n> > On 2020-Jul-17, Justin Pryzby wrote:\n> > > Ok, but should we then consider changing pg_stat_activity for consistency ?\n> > > Probably in v13 to avoid changing it a year later.\n> > > https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=b025f32e0b5d7668daec9bfa957edf3599f4baa8\n> > > \n> > > I think the story is that we're exposing to the user a \"leader pid\" what's\n> > > internally called (and used as) the \"lock group leader\", which for the leader\n> > > process is set to its own PID. But I think what we're exposing as leader_pid\n> > > will seem like an implementation artifact to users.\n> > \n> > IMO it *is* an implementation artifact if, as you say, the leader PID\n> > remains set after the parallel query is done. 
I mentioned the pgbouncer\n> > case before: if you run a single parallel query, then the process\n> > remains a \"parallel leader\" for days or weeks afterwards even if it\n> > hasn't run a parallel query ever since. That doesn't sound great to me.\n> > \n> > I think it's understandable and OK if there's a small race condition\n> > that means you report a process as a leader shortly before or shortly\n> > after a parallel query is actually executed. But doing so until backend\n> > termination seems confusing as well as useless.\n> \n> I'm not sure that connection pooling is the strongest argument against the\n> current behavior, but we could change it as suggested to show as NULL the\n> leader_pid for the leader's own process. I think that's the intuitive behavior\n> a user expects. Parallel processes are those with leader_pid IS NOT NULL. If\n> we ever used lockGroupLeader for something else, you'd also have to say AND\n> backend_type='parallel worker'.\n> \n> We should talk about doing that for PSA and for v13 as well. Here or on the\n> other thread or a new thread ? It's a simple enough change, but the question\n> is if we want to provide a more \"cooked\" view which hides the internals, and if\n> so, is this really enough.\n> \n> --- a/src/backend/utils/adt/pgstatfuncs.c\n> +++ b/src/backend/utils/adt/pgstatfuncs.c\n> @@ -737,3 +737,4 @@ pg_stat_get_activity(PG_FUNCTION_ARGS)\n> leader = proc->lockGroupLeader;\n> - if (leader)\n> + if (leader && leader->pid != beentry->st_procpid)\n> {\n> values[29] = Int32GetDatum(leader->pid);\n\nThis thread is about a new feature that I proposed which isn't yet committed\n(logging leader_pid). But it raises a question which is immediately relevant\nto pg_stat_activity.leader_pid, which is committed for v13. 
So feel free to\nmove to a new thread or to the thread for commit b025f3.\n\nI added this to the Opened Items list so it's not lost.\n\nI see a couple options:\n\n- Update the documentation only, saying something like \"leader_pid: the lock\n group leader. For a process involved in parallel query, this is the parallel\n leader. In particular, for the leader process itself, leader_pid = pid, and\n it is not reset until the leader terminates (it does not change when parallel\n workers exit). This leaves in place the \"raw\" view of the data structure,\n which can be desirable, but can be perceived as exposing unfriendly\n implementation details.\n\n- Functional change to show leader_pid = NULL for the leader itself. Maybe\n the columns should only be not-NULL when st_backendType == B_BG_WORKER &&\n bgw_type='parallel worker'. Update documentation to say: \"leader_pid: for\n parallel workers, the PID of their leader process\". (not a raw view of the\n \"lock group leader\").\n\n- ??\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 20 Jul 2020 18:30:48 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: expose parallel leader in CSV and log_line_prefix" }, { "msg_contents": "On Mon, Jul 20, 2020 at 06:30:48PM -0500, Justin Pryzby wrote:\n> This thread is about a new feature that I proposed which isn't yet committed\n> (logging leader_pid). But it raises a question which is immediately relevant\n> to pg_stat_activity.leader_pid, which is committed for v13. So feel free to\n> move to a new thread or to the thread for commit b025f3.\n\nFor a change of this size, with everybody involved in the past\ndiscussion already on this thread, and knowing that you already\ncreated an open item pointing to this part of the thread, I am not\nsure that I would bother spawning a new thread now :) \n\n> I see a couple options:\n> \n> - Update the documentation only, saying something like \"leader_pid: the lock\n> group leader. 
For a process involved in parallel query, this is the parallel\n> leader. In particular, for the leader process itself, leader_pid = pid, and\n> it is not reset until the leader terminates (it does not change when parallel\n> workers exit). This leaves in place the \"raw\" view of the data structure,\n> which can be desirable, but can be perceived as exposing unfriendly\n> implementation details.\n> \n> - Functional change to show leader_pid = NULL for the leader itself. Maybe\n> the columns should only be not-NULL when st_backendType == B_BG_WORKER &&\n> bgw_type='parallel worker'. Update documentation to say: \"leader_pid: for\n> parallel workers, the PID of their leader process\". (not a raw view of the\n> \"lock group leader\").\n\nYeah, I don't mind revisiting that per the connection pooler argument.\nAnd I'd rather keep the simple suggestion of upthread to leave the\nfield as NULL for the parallel group leader with a PID match but not a\nbackend type check so as this could be useful for other types of\nprocesses. 
This leads me to the attached with the docs updated\n(tested with read-only pgbench spawning parallel workers with\npg_stat_activity queried in parallel), to be applied down to 13.\nThoughts are welcome.\n--\nMichael", "msg_date": "Tue, 21 Jul 2020 12:51:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: expose parallel leader in CSV and log_line_prefix" }, { "msg_contents": "On Tue, Jul 21, 2020 at 12:51:45PM +0900, Michael Paquier wrote:\n> And I'd rather keep the simple suggestion of upthread to leave the\n> field as NULL for the parallel group leader with a PID match but not a\n> backend type check so as this could be useful for other types of\n> processes.\n\nThe documentation could talk about either:\n\n1) \"lock group leader\" - low-level, raw view of the internal data structure\n(with a secondary mention that \"for a parallel process, this is its parallel\nleader).\n2) \"parallel leaders\" high-level, user-facing, \"cooked\" view;\n\nRight now it doesn't matter, but it seems that if we document the high-level\n\"parallel leader\", then we don't need to accomodate future uses (at least until\nthe future happens).\n\n> diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml\n> index dc49177c78..15c598b2a5 100644\n> --- a/doc/src/sgml/monitoring.sgml\n> +++ b/doc/src/sgml/monitoring.sgml\n> @@ -687,12 +687,9 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser\n> <structfield>leader_pid</structfield> <type>integer</type>\n> </para>\n> <para>\n> - Process ID of the parallel group leader if this process is or\n> - has been involved in parallel query, or null. This field is set\n> - when a process wants to cooperate with parallel workers, and\n> - remains set as long as the process exists. For a parallel group leader,\n> - this field is set to its own process ID. 
For a parallel worker,\n> - this field is set to the process ID of the parallel group leader.\n> + Process ID of the parallel group leader if this process is involved\n> + in parallel query, or null. For a parallel group leader, this field\n> + is <literal>NULL</literal>.\n> </para></entry>\n> </row>\n\nFWIW , I prefer something like my earlier phrase:\n\n| For a parallel worker, this is the Process ID of its leader process. Null\n| for processes which are not parallel workers.\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 20 Jul 2020 23:12:31 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: expose parallel leader in CSV and log_line_prefix" }, { "msg_contents": "On Mon, Jul 20, 2020 at 11:12:31PM -0500, Justin Pryzby wrote:\n> On Tue, Jul 21, 2020 at 12:51:45PM +0900, Michael Paquier wrote:\n> The documentation could talk about either:\n> \n> 1) \"lock group leader\" - low-level, raw view of the internal data structure\n> (with a secondary mention that \"for a parallel process, this is its parallel\n> leader).\n> 2) \"parallel leaders\" high-level, user-facing, \"cooked\" view;\n> \n> Right now it doesn't matter, but it seems that if we document the high-level\n> \"parallel leader\", then we don't need to accomodate future uses (at least until\n> the future happens).\n\nHmm. Not sure. This sounds like material for a separate and larger\npatch.\n\n>> <para>\n>> - Process ID of the parallel group leader if this process is or\n>> - has been involved in parallel query, or null. This field is set\n>> - when a process wants to cooperate with parallel workers, and\n>> - remains set as long as the process exists. For a parallel group leader,\n>> - this field is set to its own process ID. For a parallel worker,\n>> - this field is set to the process ID of the parallel group leader.\n>> + Process ID of the parallel group leader if this process is involved\n>> + in parallel query, or null. 
For a parallel group leader, this field\n>> + is <literal>NULL</literal>.\n>> </para></entry>\n> \n> FWIW , I prefer something like my earlier phrase:\n> \n> | For a parallel worker, this is the Process ID of its leader process. Null\n> | for processes which are not parallel workers.\n\nI preferred mine, and it seems to me that the first sentence of the\nprevious patch covers already both things mentioned in your sentence.\nIt also seems to me that it is an important thing to directly outline\nthat this field remains NULL for group leaders.\n--\nMichael", "msg_date": "Tue, 21 Jul 2020 13:33:25 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: expose parallel leader in CSV and log_line_prefix" }, { "msg_contents": "On Fri, Jul 17, 2020 at 07:34:54AM +0200, Julien Rouhaud wrote:\n> I had the same concern and was thinking about this approach too.\n> Another argument is that IIUC any log emitted due to\n> log_min_duration_statement wouldn't see the backend as executing a\n> parallel query, since the workers would already have been shut down.\n\nNot sure that it is worth bothering about this case. 
You could also\nhave a backend killed by log_min_duration_statement on a query that\ndid not involve parallel query, where you would still report the PID\nif it got involved at least once in parallel query for a query before\nthat.\n--\nMichael", "msg_date": "Tue, 21 Jul 2020 13:38:21 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: expose parallel leader in CSV and log_line_prefix" }, { "msg_contents": "On Tue, Jul 21, 2020 at 6:33 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Jul 20, 2020 at 11:12:31PM -0500, Justin Pryzby wrote:\n> > On Tue, Jul 21, 2020 at 12:51:45PM +0900, Michael Paquier wrote:\n> > The documentation could talk about either:\n> >\n> > 1) \"lock group leader\" - low-level, raw view of the internal data structure\n> > (with a secondary mention that \"for a parallel process, this is its parallel\n> > leader).\n> > 2) \"parallel leaders\" high-level, user-facing, \"cooked\" view;\n> >\n> > Right now it doesn't matter, but it seems that if we document the high-level\n> > \"parallel leader\", then we don't need to accomodate future uses (at least until\n> > the future happens).\n>\n> Hmm. Not sure. This sounds like material for a separate and larger\n> patch.\n>\n> >> <para>\n> >> - Process ID of the parallel group leader if this process is or\n> >> - has been involved in parallel query, or null. This field is set\n> >> - when a process wants to cooperate with parallel workers, and\n> >> - remains set as long as the process exists. For a parallel group leader,\n> >> - this field is set to its own process ID. For a parallel worker,\n> >> - this field is set to the process ID of the parallel group leader.\n> >> + Process ID of the parallel group leader if this process is involved\n> >> + in parallel query, or null. 
For a parallel group leader, this field\n> >> + is <literal>NULL</literal>.\n> >> </para></entry>\n> >\n> > FWIW , I prefer something like my earlier phrase:\n> >\n> > | For a parallel worker, this is the Process ID of its leader process. Null\n> > | for processes which are not parallel workers.\n>\n> I preferred mine, and it seems to me that the first sentence of the\n> previous patch covers already both things mentioned in your sentence.\n> It also seems to me that it is an important thing to directly outline\n> that this field remains NULL for group leaders.\n\nI agree that Michael's version seems less error prone and makes\neverything crystal clear, so +1 for it.\n\n\n", "msg_date": "Wed, 22 Jul 2020 14:25:29 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: expose parallel leader in CSV and log_line_prefix" }, { "msg_contents": "On 2020-Jul-21, Michael Paquier wrote:\n\n> On Mon, Jul 20, 2020 at 11:12:31PM -0500, Justin Pryzby wrote:\n\n> >> + Process ID of the parallel group leader if this process is involved\n> >> + in parallel query, or null. For a parallel group leader, this field\n> >> + is <literal>NULL</literal>.\n> >> </para></entry>\n\n> > | For a parallel worker, this is the Process ID of its leader process. Null\n> > | for processes which are not parallel workers.\n\nHow about we combine both. \"Process ID of the parallel group leader, if\nthis process is a parallel query worker. 
NULL if this process is a\nparallel group leader or does not participate in parallel query\".\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 22 Jul 2020 11:36:05 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: expose parallel leader in CSV and log_line_prefix" }, { "msg_contents": "On Wed, Jul 22, 2020 at 11:36:05AM -0400, Alvaro Herrera wrote:\n> How about we combine both. \"Process ID of the parallel group leader, if\n> this process is a parallel query worker. NULL if this process is a\n> parallel group leader or does not participate in parallel query\".\n\nSounds fine to me. Thanks.\n\nDo others have any objections with this wording?\n--\nMichael", "msg_date": "Thu, 23 Jul 2020 09:52:14 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: expose parallel leader in CSV and log_line_prefix" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Wed, Jul 22, 2020 at 11:36:05AM -0400, Alvaro Herrera wrote:\n>> How about we combine both. \"Process ID of the parallel group leader, if\n>> this process is a parallel query worker. NULL if this process is a\n>> parallel group leader or does not participate in parallel query\".\n\n> Sounds fine to me. Thanks.\n> Do others have any objections with this wording?\n\nIs \"NULL\" really le mot juste here? 
If we're talking about text strings,\nas the thread title implies (I've not read the patch), then I think you\nshould say \"empty string\", because the SQL concept of null doesn't apply.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 22 Jul 2020 20:59:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: expose parallel leader in CSV and log_line_prefix" }, { "msg_contents": "On Wed, Jul 22, 2020 at 08:59:04PM -0400, Tom Lane wrote:\n> Is \"NULL\" really le mot juste here? If we're talking about text strings,\n> as the thread title implies (I've not read the patch), then I think you\n> should say \"empty string\", because the SQL concept of null doesn't apply.\n\nSorry for the confusion. This part of the thread applies to the open\nitem for v13 related to pg_stat_activity's leader_pid. A different\nthread should have been spawned for this specific topic, but things\nare as they are..\n--\nMichael", "msg_date": "Thu, 23 Jul 2020 10:42:36 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: expose parallel leader in CSV and log_line_prefix" }, { "msg_contents": "On Thu, Jul 23, 2020 at 09:52:14AM +0900, Michael Paquier wrote:\n> Sounds fine to me. Thanks.\n> \n> Do others have any objections with this wording?\n\nI have used the wording suggested by Alvaro, and applied the patch\ndown to 13. Now let's see about the original item of this thread..\n--\nMichael", "msg_date": "Sun, 26 Jul 2020 16:42:06 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: expose parallel leader in CSV and log_line_prefix" }, { "msg_contents": "On Sun, Jul 26, 2020 at 04:42:06PM +0900, Michael Paquier wrote:\n> On Thu, Jul 23, 2020 at 09:52:14AM +0900, Michael Paquier wrote:\n> > Sounds fine to me. 
Thanks.\n> > \n> > Do others have any objections with this wording?\n> \n> I have used the wording suggested by Alvaro, and applied the patch\n> down to 13. Now let's see about the original item of this thread..\n\nUpdated with updated wording to avoid \"null\", per Tom.\n\n-- \nJustin", "msg_date": "Sun, 26 Jul 2020 13:54:27 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: expose parallel leader in CSV and log_line_prefix" }, { "msg_contents": "On Sun, Jul 26, 2020 at 01:54:27PM -0500, Justin Pryzby wrote:\n> + <row>\n> + <entry><literal>%P</literal></entry>\n> + <entry>For a parallel worker, this is the Process ID of its leader\n> + process.\n> + </entry>\n> + <entry>no</entry>\n> + </row>\n\nLet's be a maximum simple and consistent with surrounding descriptions\nhere and what we have for pg_stat_activity, say:\n\"Process ID of the parallel group leader, if this process is a\nparallel query worker.\"\n\n> + case 'P':\n> + if (MyProc)\n> + {\n> + PGPROC *leader = MyProc->lockGroupLeader;\n> + if (leader == NULL || leader->pid == MyProcPid)\n> + /* padding only */\n> + appendStringInfoSpaces(buf,\n> + padding > 0 ? 
padding : -padding);\n> + else if (padding != 0)\n> + appendStringInfo(buf, \"%*d\", padding, leader->pid);\n> + else\n> + appendStringInfo(buf, \"%d\", leader->pid);\n\nIt seems to me we should document here that the check on MyProcPid\nensures that this only prints the leader PID only for parallel workers\nand discards the leader.\n\n> + appendStringInfoChar(&buf, ',');\n> +\n> + /* leader PID */\n> + if (MyProc)\n> + {\n> + PGPROC *leader = MyProc->lockGroupLeader;\n> + if (leader && leader->pid != MyProcPid)\n> + appendStringInfo(&buf, \"%d\", leader->pid);\n> + }\n> +\n\nSame here.\n\nExcept for those nits, I have tested the patch and things behave as we\nwant (including padding and docs), so this looks good to me.\n--\nMichael", "msg_date": "Tue, 28 Jul 2020 10:10:33 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: expose parallel leader in CSV and log_line_prefix" }, { "msg_contents": "On Tue, Jul 28, 2020 at 10:10:33AM +0900, Michael Paquier wrote:\n> Except for those nits, I have tested the patch and things behave as we\n> want (including padding and docs), so this looks good to me.\n\nRevised with your suggestions.\n\n-- \nJustin", "msg_date": "Fri, 31 Jul 2020 15:04:48 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: expose parallel leader in CSV and log_line_prefix" }, { "msg_contents": "On Fri, Jul 31, 2020 at 03:04:48PM -0500, Justin Pryzby wrote:\n> On Tue, Jul 28, 2020 at 10:10:33AM +0900, Michael Paquier wrote:\n> > Except for those nits, I have tested the patch and things behave as we\n> > want (including padding and docs), so this looks good to me.\n> \n> Revised with your suggestions.\n\nUh, wrong patch. 
2nd attempt.\n\nAlso, I was reminded by Tom's c410af098 about this comment:\n\n * Further note: At least on some platforms, passing %*s rather than\n * %s to appendStringInfo() is substantially slower, so many of the\n * cases below avoid doing that unless non-zero padding is in fact\n * specified.\n\nIt seems we can remove that hack and avoid its spiraling conditionals.\nIt's cleaner to make that 0001.\n\n-- \nJustin", "msg_date": "Fri, 31 Jul 2020 22:31:13 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: expose parallel leader in CSV and log_line_prefix" }, { "msg_contents": "On Fri, Jul 31, 2020 at 10:31:13PM -0500, Justin Pryzby wrote:\n> Also, I was reminded by Tom's c410af098 about this comment:\n> \n> * Further note: At least on some platforms, passing %*s rather than\n> * %s to appendStringInfo() is substantially slower, so many of the\n> * cases below avoid doing that unless non-zero padding is in fact\n> * specified.\n> \n> It seems we can remove that hack and avoid its spiraling conditionals.\n> It's cleaner to make that 0001.\n\nNot sure what 0001 is doing on this thread, so I would suggest to\ncreate a new thread for that to attract the correct audience. It is\ntrue that we should not need that anymore as we use our own\nimplementation of sprintf now.\n\nFor now, I have taken 0002 as a base, fixed a couple of things (doc\ntweaks, removed unnecessary header inclusion, etc.), and committed it,\nmeaning that we are done here.\n--\nMichael", "msg_date": "Mon, 3 Aug 2020 13:41:52 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: expose parallel leader in CSV and log_line_prefix" } ]
[ { "msg_contents": "Hello,\nit seems that column \"rows\" is not updated after CREATE TABLE AS SELECT\nstatements.\n\npg13devel (snapshot 2020-03-14)\npostgres=# select name,setting from pg_settings where name like 'pg_stat%';\n name | setting\n----------------------------------+---------\n pg_stat_statements.max | 5000\n pg_stat_statements.save | on\n pg_stat_statements.track | all\n pg_stat_statements.track_utility | on\n(4 rows)\n\npostgres=# select pg_stat_statements_reset();\n pg_stat_statements_reset\n--------------------------\n\n(1 row)\n\n\npostgres=# create table ctas as select * from pg_class;\nSELECT 386\npostgres=# select query,calls,rows from pg_stat_statements where query like\n'create table ctas%';\n query | calls | rows\n---------------------------------------------+-------+------\n create table ctas as select * from pg_class | 1 | 0\n(1 row)\n\nafter modifying the following line in pg_stat_statements.c\n\nrows = (qc && qc->commandTag == CMDTAG_COPY) ? qc->nprocessed : 0;\ninto\nrows = (qc && (qc->commandTag == CMDTAG_COPY \n || qc->commandTag == CMDTAG_SELECT)\n ) ? 
qc->nprocessed : 0;\n\ncolumn rows seems properly updated.\n\nWhat do you think about that fix ?\nThanks in advance\nRegards\nPAscal\n\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-bugs-f2117394.html\n\n\n", "msg_date": "Sun, 15 Mar 2020 10:35:55 -0700 (MST)", "msg_from": "legrand legrand <legrand_legrand@hotmail.com>", "msg_from_op": true, "msg_subject": "pg_stat_statements: rows not updated for CREATE TABLE AS SELECT\n statements" }, { "msg_contents": "Same remark for syntax\n\nCREATE MATERIALIZED VIEW\n\nas well.\n\nRegards\nPAscal\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-bugs-f2117394.html\n\n\n", "msg_date": "Wed, 25 Mar 2020 15:39:03 -0700 (MST)", "msg_from": "legrand legrand <legrand_legrand@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_stat_statements: rows not updated for CREATE TABLE AS SELECT\n statements" }, { "msg_contents": "\n\nOn 2020/03/16 2:35, legrand legrand wrote:\n> Hello,\n> it seems that column \"rows\" is not updated after CREATE TABLE AS SELECT\n> statements.\n> \n> pg13devel (snapshot 2020-03-14)\n> postgres=# select name,setting from pg_settings where name like 'pg_stat%';\n> name | setting\n> ----------------------------------+---------\n> pg_stat_statements.max | 5000\n> pg_stat_statements.save | on\n> pg_stat_statements.track | all\n> pg_stat_statements.track_utility | on\n> (4 rows)\n> \n> postgres=# select pg_stat_statements_reset();\n> pg_stat_statements_reset\n> --------------------------\n> \n> (1 row)\n> \n> \n> postgres=# create table ctas as select * from pg_class;\n> SELECT 386\n> postgres=# select query,calls,rows from pg_stat_statements where query like\n> 'create table ctas%';\n> query | calls | rows\n> ---------------------------------------------+-------+------\n> create table ctas as select * from pg_class | 1 | 0\n> (1 row)\n\nThanks for the report! 
Yeah, it seems worth improving this.\n\n> after modifying the following line in pg_stat_statements.c\n> \n> rows = (qc && qc->commandTag == CMDTAG_COPY) ? qc->nprocessed : 0;\n> into\n> rows = (qc && (qc->commandTag == CMDTAG_COPY\n> || qc->commandTag == CMDTAG_SELECT)\n> ) ? qc->nprocessed : 0;\n> \n> column rows seems properly updated.\n> \n> What do you think about that fix ?\n\nThe utility commands that return CMDTAG_SELECT are\nonly CREATE TABLE AS SELECT and CREATE MATERIALIZED VIEW?\nI'd just like to confirm that there is no case where \"rows\" must not\nbe counted when CMDTAG_SELECT is returned.\n\nBTW, \"rows\" should be updated when FETCH or MOVE is executed\nbecause each command returns or affects the rows?\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Thu, 26 Mar 2020 21:41:34 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements: rows not updated for CREATE TABLE AS SELECT\n statements" }, { "msg_contents": "Thank you for those answers !\n\n> The utility commands that return CMDTAG_SELECT are\n> only CREATE TABLE AS SELECT and CREATE MATERIALIZED VIEW?\n> I'd just like to confirm that there is no case where \"rows\" must not\n> be counted when CMDTAG_SELECT is returned.\n\nI don't have any in mind ...\n\n> BTW, \"rows\" should be updated when FETCH or MOVE is executed\n> because each command returns or affects the rows?\n\nYes they should, but they aren't yet (event with CMDTAG_SELECT added)\n\nNote that implicit cursors behave the same way ;o(\n\npostgres=# do $$ declare i integer; begin for i in (select 1 ) loop null;\nend loop;end; $$;\nDO\npostgres=# select calls,query,rows from pg_stat_statements;\n calls | query \n| rows\n-------+---------------------------------------------------------------------------------+------\n 1 | select pg_stat_statements_reset() \n| 1\n 1 | 
(select $1 ) \n| 0\n 1 | do $$ declare i integer; begin for i in (select 1 ) loop null; end\nloop;end; $$ | 0\n(3 rows)\n\nRegards\nPAscal\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-bugs-f2117394.html\n\n\n", "msg_date": "Fri, 27 Mar 2020 10:43:15 -0700 (MST)", "msg_from": "legrand legrand <legrand_legrand@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_stat_statements: rows not updated for CREATE TABLE AS SELECT\n statements" }, { "msg_contents": "On 2020/03/28 2:43, legrand legrand wrote:\n> Thank you for those answers !\n> \n>> The utility commands that return CMDTAG_SELECT are\n>> only CREATE TABLE AS SELECT and CREATE MATERIALIZED VIEW?\n>> I'd just like to confirm that there is no case where \"rows\" must not\n>> be counted when CMDTAG_SELECT is returned.\n> \n> I don't have any in mind ...\n\nI found that SELECT INTO also returns CMDTAG_SELECT.\n\n> \n>> BTW, \"rows\" should be updated when FETCH or MOVE is executed\n>> because each command returns or affects the rows?\n> \n> Yes they should, but they aren't yet (event with CMDTAG_SELECT added)\n> \n> Note that implicit cursors behave the same way ;o(\n\nThanks for confirming this!\n\nAttached is the patch that makes pgss track the total number of rows\nretrieved or affected by CREATE TABLE AS, SELECT INTO,\nCREATE MATERIALIZED VIEW and FETCH. I think this is new feature\nrather than bug fix, so am planning to add this patch into next CommitFest\nfor v14. 
Thought?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Tue, 21 Apr 2020 00:42:48 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements: rows not updated for CREATE TABLE AS SELECT\n statements" }, { "msg_contents": "Fujii Masao-4 wrote\n> Attached is the patch that makes pgss track the total number of rows\n> retrieved or affected by CREATE TABLE AS, SELECT INTO,\n> CREATE MATERIALIZED VIEW and FETCH. I think this is new feature\n> rather than bug fix, so am planning to add this patch into next CommitFest\n> for v14. Thought?\n\nThanks !\nmaybe v13, if this was considered as a bug that don't need backport ?\n\nRegards\nPAscal\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-bugs-f2117394.html\n\n\n", "msg_date": "Tue, 21 Apr 2020 04:37:12 -0700 (MST)", "msg_from": "legrand legrand <legrand_legrand@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_stat_statements: rows not updated for CREATE TABLE AS SELECT\n statements" }, { "msg_contents": "\n\nOn 2020/04/21 20:37, legrand legrand wrote:\n> Fujii Masao-4 wrote\n>> Attached is the patch that makes pgss track the total number of rows\n>> retrieved or affected by CREATE TABLE AS, SELECT INTO,\n>> CREATE MATERIALIZED VIEW and FETCH. I think this is new feature\n>> rather than bug fix, so am planning to add this patch into next CommitFest\n>> for v14. Thought?\n> \n> Thanks !\n> maybe v13, if this was considered as a bug that don't need backport ?\n\nYeah, if many people think this is a bug, we can get it into v13.\nBut at least for me it looks like an improvement of the capability\nof pgss rather than a bug fix. If the document of pgss clearly\nexplained \"pgss tracks the total number of rows affect by even\nutility command ...\", I think that we can treat this as a bug. 
But...\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 22 Apr 2020 12:17:11 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements: rows not updated for CREATE TABLE AS SELECT\n statements" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\nThe patch applies cleanly and works as expected.", "msg_date": "Wed, 06 May 2020 13:49:23 +0000", "msg_from": "Asif Rehman <asifr.rehman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements: rows not updated for CREATE TABLE AS SELECT\n statements" }, { "msg_contents": "\n\nOn 2020/05/06 22:49, Asif Rehman wrote:\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: not tested\n> Documentation: not tested\n> \n> The patch applies cleanly and works as expected.\n\nThanks for the review and test!\nSince this patch was marked as Ready for Committer, I pushed it.\n\nRegards,\n\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 29 Jul 2020 23:24:33 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: pg_stat_statements: rows not updated for CREATE TABLE AS SELECT\n statements" } ]
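A minimal sketch of the behavior discussed in the thread above, once the change is in (not taken from the thread; the table name and row count are illustrative, and it assumes pg_stat_statements is installed and loaded via shared_preload_libraries):

```sql
SELECT pg_stat_statements_reset();

-- A utility command that carries rows: CREATE TABLE AS SELECT.
CREATE TABLE ctas_demo AS
SELECT g AS id FROM generate_series(1, 1000) g;

-- With the patch applied, "rows" reflects the 1000 rows written; before
-- it, the only utility command counted was COPY (CMDTAG_COPY), so this
-- would have shown 0.
SELECT query, rows
  FROM pg_stat_statements
 WHERE query LIKE 'CREATE TABLE%';
```

Per the patch description above, the same accounting applies to SELECT INTO, CREATE MATERIALIZED VIEW and FETCH.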
[ { "msg_contents": "While poking at Pavel's \"anycompatible\" patch, I found a couple\nmore pre-existing issues having to do with special cases for\nactual input type \"anyarray\". Ordinarily that would be impossible\nsince we should have resolved \"anyarray\" to some specific array\ntype earlier; but you can make it happen by applying a function\nto one of the \"anyarray\" columns of pg_statistic or pg_stats.\n\n* I can provoke an assertion failure thus:\n\nregression=# create function fooid(f1 anyarray) returns anyarray\nas 'select $1' language sql;\nCREATE FUNCTION\nregression=# select fooid(stavalues1) from pg_statistic;\nserver closed the connection unexpectedly\n\nThe server log shows\n\nTRAP: BadArgument(\"!IsPolymorphicType(rettype)\", File: \"functions.c\", Line: 1606)\npostgres: postgres regression [local] SELECT(ExceptionalCondition+0x55)[0x8e85c5]\npostgres: postgres regression [local] SELECT(check_sql_fn_retval+0x79b)[0x6664db]\n\nThe reason this happens is that the parser intentionally allows an\nactual argument type of anyarray to match a declared argument type\nof anyarray, whereupon the \"resolved\" function output type is also\nanyarray. We have some regression test cases that depend on that\nbehavior, so we can't just take it out. However, check_sql_fn_retval\nis assuming too much. 
In a non-assert build, what happens is\n\nERROR: 42P13: return type anyarray is not supported for SQL functions\nCONTEXT: SQL function \"fooid\" during inlining\nLOCATION: check_sql_fn_retval, functions.c:1888\n\nbecause the code after the assert properly rejects pseudotypes.\nThat seems fine, so I think it's sufficient to take out that assertion.\n(Note: all the PLs throw errors successfully, so it's just SQL-language\nfunctions with this issue.)\n\n* There's some inconsistency between these cases:\n\nregression=# create function foo1(f1 anyarray, f2 anyelement) returns bool\nlanguage sql as 'select $1[1] = f2';\nCREATE FUNCTION\nregression=# select foo1(stavalues1, 46) from pg_statistic;\nERROR: function foo1(anyarray, integer) does not exist\nLINE 1: select foo1(stavalues1, 46) from pg_statistic;\n ^\nHINT: No function matches the given name and argument types. You might need to add explicit type casts.\n\nregression=# create function foo2(f1 anyarray, f2 anyrange) returns bool\nlanguage sql as 'select $1[1] = lower(f2)';\nCREATE FUNCTION\nregression=# select foo2(stavalues1, int8range(42,46)) from pg_statistic;\nERROR: argument declared anyrange is not consistent with argument declared anyelement\nDETAIL: int8range versus anyelement\n\nThe reason for the inconsistency is that parse_coerce.c has special cases\nto forbid the combination of (a) matching an actual argument type of\nanyarray to a declared anyarray while (b) also having an anyelement\nargument. That's to prevent the risk that the actual array element type\ndoesn't agree with the other argument. But there's no similar restriction\nfor the combination of anyarray and anyrange arguments, which seems like\na clear oversight in the anyrange logic. 
On reflection, in fact, we\nshould not allow matching an actual-argument anyarray if there are *any*\nother pseudotype arguments, including another anyarray, because we can't\nguarantee that the two anyarrays will contain matching argument types.\n\nA rule like that would also justify not worrying about matching\nanyarray to anycompatiblearray (as Pavel's patch already doesn't,\nthough not for any documented reason). There is little point in using\nanycompatiblearray unless there's at least one other polymorphic argument\nto match it against, so if we're going to reject anyarray input in such\ncases, there's no situation where it's useful to allow that.\n\nHaving said that, I'm not sure whether to prefer the first error, which\nhappens because check_generic_type_consistency decides the function\ndoesn't match at all, or the second, where check_generic_type_consistency\naccepts the match and then enforce_generic_type_consistency spits up.\nThe second error isn't all that much better ... but maybe we could accept\nthe match and then make enforce_generic_type_consistency issue a more\non-point error? That would carry some risk of creating ambiguous-function\nerrors where there were none before, but I doubt it's a big problem.\nIf that change makes us unable to pick a function, the same failure\nwould occur for a similar call involving a regular array column.\n\nIn any case I'm inclined not to back-patch a fix for the second issue,\nsince all it's going to do is exchange one error for another. Not sure\nabout the assertion removal --- that wouldn't affect production builds,\nbut people doing development/testing on back branches might appreciate it.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 15 Mar 2020 19:17:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "More weird stuff in polymorphic type resolution" } ]
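To make the special case concrete, a short sketch (not from the message itself) contrasting ordinary polymorphic resolution with the pg_statistic case described above:

```sql
CREATE FUNCTION fooid(f1 anyarray) RETURNS anyarray
AS 'select $1' LANGUAGE sql;

-- Ordinary call: the actual argument has a concrete array type, so
-- anyarray resolves (here to integer[]) and the call works normally.
SELECT fooid(ARRAY[1, 2, 3]);

-- Only a column whose declared type is itself anyarray, such as
-- pg_statistic.stavalues1, leaves the result type unresolved and reaches
-- the assertion/error paths discussed above.
SELECT fooid(stavalues1) FROM pg_statistic;
```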
[ { "msg_contents": "Hi,\n\nIt seems the comments on SharedHotStandbyActive and\nSharedRecoveryInProgress are the same in XLogCtlData.\n\nHow about modifying the comment on SharedHotStandbyActive?\n\nAttached a patch.\n\nRegards,\n--\nTorikoshi Atsushi", "msg_date": "Mon, 16 Mar 2020 10:57:34 +0900", "msg_from": "Atsushi Torikoshi <atorik@gmail.com>", "msg_from_op": true, "msg_subject": "comments on elements of xlogctldata" }, { "msg_contents": "\n\nOn 2020/03/16 10:57, Atsushi Torikoshi wrote:\n> Hi,\n> \n> It seems the comments on SharedHotStandbyActive and SharedRecoveryInProgress are the same in XLogCtlData.\n> \n> How about modifying the comment on SharedHotStandbyActive?\n> \n> Attached a patch.\n\nThanks for the report and patch!\nThe patch looks good to me.\nI will commit it.\n\nRegards,\n\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Mon, 16 Mar 2020 11:08:37 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: comments on elements of xlogctldata" }, { "msg_contents": "\n\nOn 2020/03/16 11:08, Fujii Masao wrote:\n> \n> \n> On 2020/03/16 10:57, Atsushi Torikoshi wrote:\n>> Hi,\n>>\n>> It seems the comments on SharedHotStandbyActive and SharedRecoveryInProgress are the same in XLogCtlData.\n>>\n>> How about modifying the comment on SharedHotStandbyActive?\n>>\n>> Attached a patch.\n> \n> Thanks for the report and patch!\n> The patch looks good to me.\n> I will commit it.\n\nPushed! Thanks!\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Tue, 17 Mar 2020 12:07:29 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: comments on elements of xlogctldata" } ]
[ { "msg_contents": "Hi,\n\nAs far as I read the reloptions.c, autovacuum_vacuum_cost_delay,\nautovacuum_vacuum_scale_factor and autovacuum_analyze_scale_factor\nare the members of relopt_real, so their type seems the same, real.\n\nBut the manual about storage parameters[1] says two of their type\nare float4 and the other is floating point.\n\n > autovacuum_vacuum_cost_delay, toast.autovacuum_vacuum_cost_delay\n(floating point)\n > autovacuum_vacuum_scale_factor, toast.autovacuum_vacuum_scale_factor\n(float4)\n > autovacuum_analyze_scale_factor (float4)\n\nAnd the manual about GUC says all these parameters are floating\npoint.\n\n > autovacuum_vacuum_cost_delay (floating point)\n > autovacuum_vacuum_scale_factor (floating point)\n > autovacuum_analyze_scale_factor (floating point)\n\nAlso other members of relopt_real such as seq_page_cost,\nrandom_page_cost and vacuum_cleanup_index_scale_factor are\ndocumented as floating point.\n\n\nI think using float4 on storage parameters[1] are not consistent\nso far, how about changing these parameters type from float4 to\nfloating point if there are no specific reasons using float4?\n\n\nAttached a patch.\nAny thought?\n\n[1]https://www.postgresql.org/docs/devel/sql-createtable.htm\n[2]https://www.postgresql.org/docs/devel/runtime-config-autovacuum.html\n\n\nRegards,\n--\nTorikoshi Atsushi", "msg_date": "Mon, 16 Mar 2020 11:07:37 +0900", "msg_from": "Atsushi Torikoshi <atorik@gmail.com>", "msg_from_op": true, "msg_subject": "type of some table storage params on doc" }, { "msg_contents": "On Mon, Mar 16, 2020 at 11:07:37AM +0900, Atsushi Torikoshi wrote:\n> As far as I read the reloptions.c, autovacuum_vacuum_cost_delay,\n> autovacuum_vacuum_scale_factor and autovacuum_analyze_scale_factor\n> are the members of relopt_real, so their type seems the same, real.\n\nIn this case, the parsing uses parse_real(), which is exactly the same\ncode path as what real GUCs use.\n\n> But the manual about storage parameters[1] says two of 
their type\n> are float4 and the other is floating point.\n>\n> I think using float4 on storage parameters[1] are not consistent\n> so far, how about changing these parameters type from float4 to\n> floating point if there are no specific reasons using float4?\n\nThat's a good idea, so I am fine to apply your patch as float4 is a\ndata type. However, let's see first if others have more comments or\nobjections.\n--\nMichael", "msg_date": "Wed, 18 Mar 2020 15:32:04 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: type of some table storage params on doc" }, { "msg_contents": "On 2020-Mar-18, Michael Paquier wrote:\n\n> On Mon, Mar 16, 2020 at 11:07:37AM +0900, Atsushi Torikoshi wrote:\n> > As far as I read the reloptions.c, autovacuum_vacuum_cost_delay,\n> > autovacuum_vacuum_scale_factor and autovacuum_analyze_scale_factor\n> > are the members of relopt_real, so their type seems the same, real.\n> \n> In this case, the parsing uses parse_real(), which is exactly the same\n> code path as what real GUCs use.\n> \n> > But the manual about storage parameters[1] says two of their type\n> > are float4 and the other is floating point.\n> >\n> > I think using float4 on storage parameters[1] are not consistent\n> > so far, how about changing these parameters type from float4 to\n> > floating point if there are no specific reasons using float4?\n> \n> That's a good idea, so I am fine to apply your patch as float4 is a\n> data type. However, let's see first if others have more comments or\n> objections.\n\nHmm. So unadorned 'floating point' seems to refer to float8; you have\nto use float(24) in order to get a float4. The other standards-mandated\nname for float4 seems to be REAL. (I had a look around but was unable\nto figure out whether the standard mandates exact bit widths other than\nthe precision spec). 
Since they're not doubles, what about we use REAL\nrather than FLOATING POINT?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 18 Mar 2020 13:51:44 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: type of some table storage params on doc" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2020-Mar-18, Michael Paquier wrote:\n>> On Mon, Mar 16, 2020 at 11:07:37AM +0900, Atsushi Torikoshi wrote:\n>>>> In this case, the parsing uses parse_real(), which is exactly the same\n>>>> code path as what real GUCs use.\n\n> Hmm.
So unadorned 'floating point' seems to refer to float8; you have\n> > to use float(24) in order to get a float4. The other standards-mandated\n> > name for float4 seems to be REAL. (I had a look around but was unable\n> > to figure out whether the standard mandates exact bit widths other than\n> > the precision spec). Since they're not doubles, what about we use REAL\n> > rather than FLOATING POINT?\n> \n> Isn't this whole argument based on a false premise? What parse_real\n> returns is double, not float. Also notice that config.sgml consistently\n> documents those GUCs as <type>floating point</type>. (I recall having\n> recently whacked some GUC descriptions that were randomly out of line\n> with that.)\n\nAh, I hadn't checked -- I was taking the function and struct names at\nface value, but it turns out that they're lies as well -- parse_real,\nrelopt_real all parsing/storing doubles *is* confusing.\n\nThat being the case, I agree that \"float4\" is the wrong thing and\n\"floating point\" is what to use.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 18 Mar 2020 14:12:36 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: type of some table storage params on doc" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> Ah, I hadn't checked -- I was taking the function and struct names at\n> face value, but it turns out that they're lies as well -- parse_real,\n> relopt_real all parsing/storing doubles *is* confusing.\n\nYeah, that's certainly true. 
I wonder if we could rename them\nwithout causing a lot of pain for extensions?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 18 Mar 2020 13:13:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: type of some table storage params on doc" }, { "msg_contents": "On 2020-Mar-18, Tom Lane wrote:\n\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > Ah, I hadn't checked -- I was taking the function and struct names at\n> > face value, but it turns out that they're lies as well -- parse_real,\n> > relopt_real all parsing/storing doubles *is* confusing.\n> \n> Yeah, that's certainly true. I wonder if we could rename them\n> without causing a lot of pain for extensions?\n\nI don't think it will, directly; debian.codesearch.net says only patroni\nand slony1-2 contain the \"parse_real\", and both have their own\nimplementations (patroni is Python anyway). I didn't find any\nrelopt_real anywhere.\n\nHowever, if we were to rename DefineCustomRealVariable() to match, that\nwould no doubt hurt a lot of people. We also have GucRealCheckHook and\nGucRealAssignHook typedefs, but those appear to hit no Debian package.\n(In guc.c, the fallout rabbit hole goes pretty deep, but that seems well\nlocalized.)\n\nI don't think the last pg13 CF is when to be spending time on this,\nthough.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 18 Mar 2020 14:29:17 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: type of some table storage params on doc" }, { "msg_contents": "On Wed, Mar 18, 2020 at 02:29:17PM -0300, Alvaro Herrera wrote:\n> I don't think it will, directly; debian.codesearch.net says only patroni\n> and slony1-2 contain the \"parse_real\", and both have their own\n> implementations (patroni is Python anyway). 
I didn't find any\n> relopt_real anywhere.\n\nReloptions in general are not used much in extensions, and one could\nassume that reloptions of type real (well, double) are even less.\n\n> However, if we were to rename DefineCustomRealVariable() to match, that\n> would no doubt hurt a lot of people. We also have GucRealCheckHook and\n> GucRealAssignHook typedefs, but those appear to hit no Debian package.\n> (In guc.c, the fallout rabbit hole goes pretty deep, but that seems well\n> localized.)\n\nI make use of this API myself, for some personal stuff, and even some\ninternal company stuff. And I am ready to bet that it is much more\npopular than its reloption cousin mainly for bgworkers. Hence a\nrename would need a compatibility layer remaining around. Honestly, I\nam not sure that a rename is worth it.\n\n> I don't think the last pg13 CF is when to be spending time on this,\n> though.\n\nIndeed.\n\nDo any of you have any arguments against the patch proposed upthread\nswitching \"float4\" to \"floating point\" then? Better be sure..\n--\nMichael", "msg_date": "Thu, 19 Mar 2020 11:41:24 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: type of some table storage params on doc" }, { "msg_contents": "On 2020-Mar-19, Michael Paquier wrote:\n\n> Do any of you have any arguments against the patch proposed upthread\n> switching \"float4\" to \"floating point\" then? Better be sure..\n\nNone here.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 19 Mar 2020 00:04:45 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: type of some table storage params on doc" }, { "msg_contents": "On Thu, Mar 19, 2020 at 12:04:45AM -0300, Alvaro Herrera wrote:\n> None here.\n\nThanks Álvaro. 
Applied and back-patched down to 9.5 then.\n--\nMichael", "msg_date": "Mon, 23 Mar 2020 13:58:51 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: type of some table storage params on doc" }, { "msg_contents": "On Mon, Mar 23, 2020 at 1:58 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Thu, Mar 19, 2020 at 12:04:45AM -0300, Alvaro Herrera wrote:\n> > None here.\n>\n> Thanks Álvaro. Applied and back-patched down to 9.5 then.\n> --\n> Michael\n>\n\nThanks for applying the patch.\n\nRegards,\n\n--\n Atsushi Torikoshi", "msg_date": "Mon, 23 Mar 2020 16:28:04 +0900", "msg_from": "Atsushi Torikoshi <atorik@gmail.com>", "msg_from_op": true, "msg_subject": "Re: type of some table storage params on doc" } ]
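For reference, a small illustrative fragment (not part of the committed doc patch) showing the parameters in question: as noted in the thread, they are parsed by the same code path as floating-point GUCs, so ordinary real literals are accepted:

```sql
CREATE TABLE relopt_demo (a int)
  WITH (autovacuum_vacuum_scale_factor = 0.005,
        autovacuum_vacuum_cost_delay   = 2.5);

ALTER TABLE relopt_demo
  SET (autovacuum_analyze_scale_factor = 0.1);
```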
[ { "msg_contents": "Hi,\n\nThe current manual on CREATE TABLE[1] describes storage\nparameters with their types.\nBut manual on CREATE INDEX[2] describes storage parameters\nWITHOUT their types.\n\nI think it'll be better to add types to storage parameters\non CREATE INDEX for the consistency.\n\nAttached a patch.\nAny thought?\n\n[1]\nhttps://www.postgresql.org/docs/devel/sql-createtable.html#SQL-CREATETABLE-STORAGE-PARAMETERS\n[2]\nhttps://www.postgresql.org/docs/devel/sql-createindex.html#SQL-CREATEINDEX-STORAGE-PARAMETERS\n\nRegards,\n--\nTorikoshi Atsushi", "msg_date": "Mon, 16 Mar 2020 11:09:49 +0900", "msg_from": "Atsushi Torikoshi <atorik@gmail.com>", "msg_from_op": true, "msg_subject": "add types to index storage params on doc" }, { "msg_contents": "\n\nOn 2020/03/16 11:09, Atsushi Torikoshi wrote:\n> Hi,\n> \n> The current manual on CREATE TABLE[1] describes storage\n> parameters with their types.\n> But manual on CREATE INDEX[2] describes storage parameters\n> WITHOUT their types.\n> \n> I think it'll be better to add types to storage parameters\n> on CREATE INDEX for the consistency.\n> \n> Attached a patch.\n> Any thought?\n\nThanks for the patch!
It basically looks good to me.\n\n- <term><literal>buffering</literal>\n+ <term><literal>buffering</literal> (<type>string</type>)\n\nIsn't it better to use \"enum\" rather than \"string\"?\nIn the docs about enum GUC parameters, \"enum\" is used there.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Mon, 16 Mar 2020 11:49:50 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: add types to index storage params on doc" }, { "msg_contents": "On Sun, Mar 15, 2020 at 7:10 PM Atsushi Torikoshi <atorik@gmail.com> wrote:\n> I think it'll be better to add types to storage parameters\n> on CREATE INDEX for the consistency.\n\nSeems reasonable to me.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 15 Mar 2020 20:19:53 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: add types to index storage params on doc" }, { "msg_contents": "Thanks for your comments!\n\nOn Mon, Mar 16, 2020 at 11:49 AM Fujii Masao <masao.fujii@oss.nttdata.com>\nwrote:\n\n> - <term><literal>buffering</literal>\n> + <term><literal>buffering</literal> (<type>string</type>)\n>\n> Isn't it better to use \"enum\" rather than \"string\"?\n> In the docs about enum GUC parameters, \"enum\" is used there.\n>\n\nAgreed. 
I've fixed it to \"enum\".\n\nBut I'm now wondering about the type of check_option[3], [4].\nBecause I decide the type to \"string\" referring to check_option, which is\nthe other element of enumRelOpts in reloptions.c.\n\nShould I also change it to \"enum\"?\n\n[3]https://www.postgresql.org/docs/devel/sql-alterview.html#id-1.9.3.45.6\n[4]https://www.postgresql.org/docs/devel/sql-createview.html#id-1.9.3.97.6\n\nRegards,\n--\nTorikoshi Atsushi", "msg_date": "Mon, 16 Mar 2020 16:50:32 +0900", "msg_from": "Atsushi Torikoshi <atorik@gmail.com>", "msg_from_op": true, "msg_subject": "Re: add types to index storage params on doc" }, { "msg_contents": "On 2020-Mar-16, Atsushi Torikoshi wrote:\n\n> Thanks for your comments!\n> \n> On Mon, Mar 16, 2020 at 11:49 AM Fujii Masao <masao.fujii@oss.nttdata.com>\n> wrote:\n> \n> > - <term><literal>buffering</literal>\n> > + <term><literal>buffering</literal> (<type>string</type>)\n> >\n> > Isn't it better to use \"enum\" rather than \"string\"?\n> > In the docs about enum GUC parameters, \"enum\" is used there.\n> \n> Agreed. 
I've fixed it to \"enum\".\n> \n> But I'm now wondering about the type of check_option[3], [4].\n> Because I decide the type to \"string\" referring to check_option, which is\n> the other element of enumRelOpts in reloptions.c.\n> \n> Should I also change it to \"enum\"?\n\nYeah, these were strings until recently (commit 773df883e8f7 Sept 2019).\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 16 Mar 2020 11:32:50 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: add types to index storage params on doc" }, { "msg_contents": "On Mon, Mar 16, 2020 at 11:32 PM Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\n>\n> > Should I also change it to \"enum\"?\n>\n> Yeah, these were strings until recently (commit 773df883e8f7 Sept 2019).\n>\n\nThanks!\n\nAttached a patch for manuals on create and alter view.\n\ncheck_option is also described in information_schema.sgml\nand its type is 'character_data', but as far as I read\ninformation_schema.sql this is correct and it seems no\nneed for modification here.\n\nRegards,\n--\nTorikoshi Atsushi", "msg_date": "Tue, 17 Mar 2020 14:52:18 +0900", "msg_from": "Atsushi Torikoshi <atorik@gmail.com>", "msg_from_op": true, "msg_subject": "Re: add types to index storage params on doc" }, { "msg_contents": "\n\nOn 2020/03/17 14:52, Atsushi Torikoshi wrote:\n> On Mon, Mar 16, 2020 at 11:32 PM Alvaro Herrera <alvherre@2ndquadrant.com <mailto:alvherre@2ndquadrant.com>> wrote:\n> \n> \n> > Should I also change it to \"enum\"?\n> \n> Yeah, these were strings until recently (commit 773df883e8f7 Sept 2019).\n> \n> \n> Thanks!\n> \n> Attached a patch for manuals on create and alter view.\n> \n> check_option is also described in information_schema.sgml\n> and its type is 'character_data', but as far as I read\n> information_schema.sql this is correct and it seems no\n> need for modification 
here.\n\nThanks for the patch!\nI pushed two patches that you posted in this thread. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Wed, 18 Mar 2020 18:31:08 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: add types to index storage params on doc" } ]
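As a reference for the parameter types the patch documents, a sketch of how they appear in DDL (table and column names are hypothetical; values illustrative):

```sql
-- fillfactor is an integer storage parameter (B-tree and others);
-- buffering is the enum-typed GiST parameter discussed above.
CREATE INDEX ON demo_table (id) WITH (fillfactor = 70);
CREATE INDEX ON demo_points USING gist (pt) WITH (buffering = on);
```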
[ { "msg_contents": "Hello, hackers.\n\n------ ABSTRACT ------\nThere is a race condition between btree_xlog_unlink_page and _bt_walk_left.\nA lot of versions are affected including 12 and new-coming 13.\nHappens only on standby. Seems like could not cause invalid query results.\n\n------ REMARK ------\nWhile working on support for index hint bits on standby [1] I have\nstarted to getting\n\"ERROR: could not find left sibling of block XXXX in index XXXX\"\nduring stress tests.\n\nI was sure I have broken something in btree and spent a lot of time\ntrying to figure what.\nAnd later... I realized what it is bug in btree since a very old times...\nBecause of much faster scans with LP_DEAD support on a standby it\nhappens much more frequently in my case.\n\n------ HOW TO REPRODUCE ------\nIt is not easy to reproduce the issue but you can try (tested on\nREL_12_STABLE and master):\n\n1) Setup master (sync replica and 'remote_apply' are not required -\njust make scripts simpler):\nautovacuum = off\nsynchronous_standby_names = '*'\nsynchronous_commit = 'remote_apply'\n\n2) Setup standby:\nprimary_conninfo = 'user=postgres host=127.0.0.1 port=5432\nsslmode=prefer sslcompression=0 gssencmode=prefer krbsrvname=postgres\ntarget_session_attrs=any'\nport = 6543\n\n3) Prepare pgbench file with content (test.bench):\nBEGIN;\nselect * from pgbench_accounts order by aid desc limit 1;\nEND;\n\n4) Prepare the index:\n./pgbench -i -s 10 -U postgres\n./psql -U postgres -c \"delete from pgbench_accounts where aid IN\n(select aid from pgbench_accounts order by aid desc limit 500000)\"\n\n5) Start index scans on the standby:\n./pgbench -f test.bench -j 1 -c ${NUMBER_OF_CORES} -n -P 1 -T 10000 -U\npostgres -p 6543\n\n6) Run vacuum on the master:\n./psql -U postgres -c \"vacuum pgbench_accounts\"\n\n7) You should see something like this:\n> progress: 1.0 s, 5.0 tps, lat 614.530 ms stddev 95.902\n> .....\n> progress: 5.0 s, 10.0 tps, lat 508.561 ms stddev 82.338\n> client 3 script 0 aborted 
in command 1 query 0: ERROR: could not find left sibling of block 1451 in index \"pgbench_accounts_pkey\"\n> progress: 6.0 s, 47.0 tps, lat 113.001 ms stddev 55.709\n> .....\n> progress: 12.0 s, 84.0 tps, lat 48.451 ms stddev 7.238\n> client 2 script 0 aborted in command 1 query 0: ERROR: could not find left sibling of block 2104 in index \"pgbench_accounts_pkey\"\n> progress: 13.0 s, 87.0 tps, lat 39.338 ms stddev 5.417\n> .....\n> progress: 16.0 s, 158.0 tps, lat 18.988 ms stddev 3.251\n> client 4 script 0 aborted in command 1 query 0: ERROR: could not find left sibling of block 2501 in index \"pgbench_accounts_pkey\"\n\nI was able to reproduce issue with vanilla PG_12 on different systems\nincluding the Windows machine.\nOn some servers it happens 100% times. On others - very seldom.\n\nIt is possible to radically increase chance to reproduce the issue by\nadding a sleep in btree_xlog_unlink_page[7]:\n> + pg_usleep(10 * 1000L);\n>\n> /* Rewrite target page as empty deleted page */\n> buffer = XLogInitBufferForRedo(record, 0);\n\n------ WHAT HAPPENS ------\nIt is race condition issue between btree_xlog_unlink_page and _bt_walk_left.\n\nbtree_xlog_unlink_page removes page from btree changing btpo_prev and\nbtpo_next of its left and right siblings to point\nto the each other and marks target page as removed. All these\noperations are done using one-page-at-a-time locking because of[4]:\n\n> * In normal operation, we would lock all the pages this WAL record\n> * touches before changing any of them. In WAL replay, it should be okay\n> * to lock just one page at a time, since no concurrent index updates can\n> * be happening, and readers should not care whether they arrive at the\n> * target page or not (since it's surely empty).\n\n_bt_walk_left walks left in very tricky way. 
Please refer to\nsrc/backend/access/nbtree/README for details[5]:\n\n> Moving left in a backward scan is complicated because we must consider\n> the possibility that the left sibling was just split (meaning we must find\n> the rightmost page derived from the left sibling), plus the possibility\n> that the page we were just on has now been deleted and hence isn't in the\n> sibling chain at all anymore.\n\nSo, this is how race is happens:\n\n0) this is target page (B) and its siblings.\nA <---> B <---> C ---> END\n\n1) walreceiver starts btree_xlog_unlink_page for the B. It is changes\nthe links from C to A and from A to C (I hope my scheme will be\ndisplayed correctly):\nA <---- B ----> C ---> END\n^ ^\n \\_____________/\n\nBut B is not marked as BTP_DELETED yet - walreceiver stops at nbtxlog:697[2].\n\n2) other backend starts _bt_walk_left from B.\nIt checks A, goes to from A to C by updated btpo_next and later sees\nend of the btree.\nSo, next step is to check if B was deleted (nbtsearch:2011)[3] and try\nto recover.\n\nBut B is not yet deleted! It will be marked as BTP_DELETED after a few\nmillis by walreceiver but not yet.\nSo, nbtsearch:2046[6] is happens.\n\n\n------ HOW TO FIX ------\nThe first idea was to mark page as BTP_DELETED before updating siblings links.\n\nSecond - to update pages in the following order:\n* change btpo_next\n* mark as BTP_DELETED\n* change btpo_prev\n\nSuch a changes fix the exactly described above race condition... but\ncause a more tricky ones to start happening.\nAnd I think it is better to avoid any too complex unclear solutions here..\n\nAnother idea - to sleep a little waiting walreceiver to mark the page\nas deleted. But it seems to feel too ugly. Also it is unclear how long\nto wait.\n\nSo, I think right way is to lock all three pages as it is done on the\nprimary. 
As far as I can see it is not causes any real performance\nregression.\n\nPatch is attached (on top of REL_12_STABLE).\n\nThanks,\nMichail.\n\n[1]: https://www.postgresql.org/message-id/flat/CANtu0ohOvgteBYmCMc2KERFiJUvpWGB0bRTbK_WseQH-L1jkrQ%40mail.gmail.com\n[2]: https://github.com/postgres/postgres/blob/REL_12_STABLE/src/backend/access/nbtree/nbtxlog.c#L697\n[3]: https://github.com/postgres/postgres/blob/REL_12_STABLE/src/backend/access/nbtree/nbtsearch.c#L2011\n[4]: https://github.com/postgres/postgres/blob/REL_12_STABLE/src/backend/access/nbtree/nbtxlog.c#L575-L581\n[5]: https://github.com/postgres/postgres/blob/REL_12_STABLE/src/backend/access/nbtree/README#L289-L314\n[6]: https://github.com/postgres/postgres/blob/REL_12_STABLE/src/backend/access/nbtree/nbtsearch.c#L2046\n[7]: https://github.com/postgres/postgres/blob/REL_12_STABLE/src/backend/access/nbtree/nbtxlog.c#L696", "msg_date": "Mon, 16 Mar 2020 17:07:53 +0300", "msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] Btree BackwardScan race condition on Standby during VACUUM" }, { "msg_contents": "Hi Michail!\n\nVery interesting bug.\n\n> 16 марта 2020 г., в 19:07, Michail Nikolaev <michail.nikolaev@gmail.com> написал(а):\n> \n> So, I think right way is to lock all three pages as it is done on the\n> primary. As far as I can see it is not causes any real performance\n> regression.\n\nIt seems to me that it's exactly the same check that I was trying to verify in amcheck patch [0].\nBut there it was verified inside amcheck, but here it is verified by index scan.\n\nBasically, one cannot check that two vice-versa pointers are in agreement without locking both.\nAs a result, they must be changed under lock too.\n\nIn my view, lock coupling is necessary here. 
I'm not sure we really need to lock three pages though.\n\nIs there a reason why concurrency protocol on standby should not be exactly the same as on primary?\n\n\nBest regards, Andrey Borodin.\n\n[0] https://commitfest.postgresql.org/24/2254/\n\n", "msg_date": "Tue, 17 Mar 2020 10:20:11 +0500", "msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Btree BackwardScan race condition on Standby during\n VACUUM" }, { "msg_contents": "On Mon, Mar 16, 2020 at 7:08 AM Michail Nikolaev\n<michail.nikolaev@gmail.com> wrote:\n> ------ ABSTRACT ------\n> There is a race condition between btree_xlog_unlink_page and _bt_walk_left.\n> A lot of versions are affected including 12 and new-coming 13.\n> Happens only on standby. Seems like could not cause invalid query results.\n\n(CC'ing Heikki, just in case.)\n\nGood catch! I haven't tried to reproduce the problem here just yet,\nbut your explanation is very easy for me to believe.\n\nAs you pointed out, the best solution is likely to involve having the\nstandby imitate the buffer lock acquisitions that take place on the\nprimary. We don't do that for page splits and page deletions. I think\nthat it's okay in the case of page splits, since we're only failing to\nperform the same bottom-up lock coupling (I added something about that\nspecific thing to the README recently). Even btree_xlog_unlink_page()\nwould probably be safe if we didn't have to worry about backwards\nscans, which are really a special case. But we do.\n\nFWIW, while I agree that this issue is more likely to occur due to the\neffects of commit 558a9165, especially when running your test case, my\nown work on B-Tree indexes for Postgres 12 might also be a factor. I\nwon't get into the reasons now, since they're very subtle, but I have\nobserved that the Postgres 12 work tends to make page deletion occur\nfar more frequently with certain workloads. 
This was really obvious\nwhen I examined the structure of B-Tree indexes over many hours while\nBenchmarkSQL/TPC-C [1] ran, for example.\n\n[1] https://github.com/petergeoghegan/benchmarksql\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 17 Mar 2020 12:30:14 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Btree BackwardScan race condition on Standby during\n VACUUM" }, { "msg_contents": "On Mon, Mar 16, 2020 at 10:20 PM Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n> It seems to me that it's exactly the same check that I was trying to verify in amcheck patch [0].\n> But there it was verified inside amcheck, but here it is verified by index scan.\n\nMaybe we can accept your patch after fixing this bug. My objection to\nthe patch was that it couples locks in a way that's not compatible\nwith btree_xlog_unlink_page(). But the problem now seems to have been\nbtree_xlog_unlink_page() itself. It's possible that there are problems\nelsewhere, but my recollection is that btree_xlog_unlink_page() was\nthe problem.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 17 Mar 2020 12:37:57 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Btree BackwardScan race condition on Standby during\n VACUUM" }, { "msg_contents": "\n\n> 18 марта 2020 г., в 00:37, Peter Geoghegan <pg@bowt.ie> написал(а):\n> \n> On Mon, Mar 16, 2020 at 10:20 PM Andrey M. Borodin <x4mmm@yandex-team.ru> wrote:\n>> It seems to me that it's exactly the same check that I was trying to verify in amcheck patch [0].\n>> But there it was verified inside amcheck, but here it is verified by index scan.\n> \n> Maybe we can accept your patch after fixing this bug. My objection to\n> the patch was that it couples locks in a way that's not compatible\n> with btree_xlog_unlink_page(). But the problem now seems to have been\n> btree_xlog_unlink_page() itself. 
It's possible that there are problems\n> elsewhere, but my recollection is that btree_xlog_unlink_page() was\n> the problem.\n\nThe problem was that btree_xlog_split() and btree_xlog_unlink_page() do not couple locks during fixing left links.\nProbably, patch in this thread should fix this in btree_xlog_split() too?\n\nBest regards, Andrey Borodin.\n\n", "msg_date": "Sun, 22 Mar 2020 15:34:38 +0500", "msg_from": "\"Andrey M. Borodin\" <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Btree BackwardScan race condition on Standby during\n VACUUM" }, { "msg_contents": "Hello.\n\n> Probably, patch in this thread should fix this in btree_xlog_split() too?\n\nI have spent some time trying to find any possible race condition\nbetween btree_xlog_split and _bt_walk_left… But I can’t find any.\nAlso, I have tried to cause any issue by putting pg_sleep put into\nbtree_xlog_split (between releasing and taking of locks) but without\nany luck.\n\nI agree it is better to keep the same locking logic for primary and\nstandby in general. But it is a possible scope of another patch.\n\nThanks,\nMichail.\n\n\n", "msg_date": "Fri, 27 Mar 2020 18:58:38 +0300", "msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Btree BackwardScan race condition on Standby during\n VACUUM" }, { "msg_contents": "On Fri, Mar 27, 2020 at 8:58 AM Michail Nikolaev\n<michail.nikolaev@gmail.com> wrote:\n> I have spent some time trying to find any possible race condition\n> between btree_xlog_split and _bt_walk_left… But I can’t find any.\n> Also, I have tried to cause any issue by putting pg_sleep put into\n> btree_xlog_split (between releasing and taking of locks) but without\n> any luck.\n\nI pushed a commit that tries to clear up some of the details around\nhow locking works during page splits. See commit 9945ad6e.\n\n> I agree it is better to keep the same locking logic for primary and\n> standby in general. 
But it is a possible scope of another patch.\n\nIt seems useful, but only up to a point. We don't need to hold locks\nacross related atomic operations (i.e. across each phase of a page\nsplit or page deletion). In particular, the lock coupling across page\nlevels that we perform on the primary when ascending the tree\nfollowing a page split doesn't need to occur on standbys. I added\nsomething about this to the nbtree README in commit 9f83468b353.\n\nI'm not surprised that you didn't find any problems in\nbtree_xlog_split(). It is already conservative about locking the\nsibling/child pages. It could hardly be more conservative (though see\nthe code and comments at the end of btree_xlog_split(), which mention\nlocking and backwards scans directly).\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 27 Mar 2020 18:45:18 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Btree BackwardScan race condition on Standby during\n VACUUM" }, { "msg_contents": "On Mon, Mar 16, 2020 at 7:08 AM Michail Nikolaev\n<michail.nikolaev@gmail.com> wrote:\n> I was sure I have broken something in btree and spent a lot of time\n> trying to figure what.\n> And later... I realized what it is bug in btree since a very old times...\n> Because of much faster scans with LP_DEAD support on a standby it\n> happens much more frequently in my case.\n\nOn second thought, I wonder how commit 558a9165 could possibly be\nrelevant here. nbtree VACUUM doesn't care about the LP_DEAD bit at\nall. Sure, btree_xlog_delete_get_latestRemovedXid() is not going to\nhave to run on the standby on Postgres 12, but that only ever happened\nat the point where we might have to split the page on the primary\n(i.e. 
when _bt_delitems_delete() is called on the primary) anyway.\n_bt_delitems_delete()/btree_xlog_delete_get_latestRemovedXid() are not\nrelated to page deletion by VACUUM.\n\nIt's true that VACUUM will routinely kill tuples that happen to have\ntheir LP_DEAD bit set, but it isn't actually influenced by the fact\nthat somebody set (or didn't set) any tuple's LP_DEAD bit. VACUUM has\nits own strategy for generating recovery conflicts (it relies on\nconflicts generated during the pruning phase of heap VACUUMing).\nVACUUM is not willing to generate ad-hoc conflicts (in the style of\n_bt_delitems_delete()) just to kill a few more tuples in relatively\nuncommon cases -- cases where some LP_DEAD bits were set after a\nVACUUM process started, but before the VACUUM process reached an\naffected (LP_DEAD bits set) leaf page.\n\nAgain, I suspect that the problem is more likely to occur on Postgres\n12 in practice because page deletion is more likely to occur on that\nversion. IOW, due to my B-Tree work for Postgres 12: commit dd299df8,\nand related commits. 
That's probably all that there is to it.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 27 Mar 2020 20:17:54 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Btree BackwardScan race condition on Standby during\n VACUUM" }, { "msg_contents": "Hello, Peter.\n\n> I added\n> something about this to the nbtree README in commit 9f83468b353.\n\nI have added some updates to your notes in the updated patch version.\n\nI also was trying to keep the original wrapping of the paragraph, so\nthe patch looks too wordy.\n\nThanks,\nMichail.", "msg_date": "Sun, 5 Apr 2020 20:04:32 +0300", "msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Btree BackwardScan race condition on Standby during\n VACUUM" }, { "msg_contents": "Hi Michail,\n\nOn Sun, Apr 5, 2020 at 10:04 AM Michail Nikolaev\n<michail.nikolaev@gmail.com> wrote:\n> > I added\n> > something about this to the nbtree README in commit 9f83468b353.\n>\n> I have added some updates to your notes in the updated patch version.\n\nMy apologies for the extended delay here.\n\nMy intention is to commit this patch to the master branch only. While\nit seems low risk, I don't see any reason to accept even a small risk\ngiven the lack of complaints from users. 
We know that this bug existed\nmany years before you discovered it.\n\nThanks\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 8 Jul 2020 14:55:26 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Btree BackwardScan race condition on Standby during\n VACUUM" }, { "msg_contents": "Hello, Peter.\n\nThanks for the update.\n\nYes, it is the right decision.\nI have started to spot that bug only while working on a faster scan\nusing hint bits on replicas [1], so it is unlikely to hit it in\nproduction at the moment.\n\nThanks,\nMichail.\n\n[1]: https://www.postgresql.org/message-id/CANtu0ojmkN_6P7CQWsZ%3DuEgeFnSmpCiqCxyYaHnhYpTZHj7Ubw%40mail.gmail.com\n\n\n", "msg_date": "Fri, 10 Jul 2020 23:25:15 +0300", "msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Btree BackwardScan race condition on Standby during\n VACUUM" }, { "msg_contents": "On Mon, Mar 16, 2020 at 7:08 AM Michail Nikolaev\n<michail.nikolaev@gmail.com> wrote:\n> While working on support for index hint bits on standby [1] I have\n> started to getting\n> \"ERROR: could not find left sibling of block XXXX in index XXXX\"\n> during stress tests.\n\nI reproduced the bug using your steps (including the pg_usleep() hack)\ntoday. It was fairly easy to confirm the problem.\n\nAttached is a revised version of your patch. It renames the buffer\nvariable names, and changes the precise order in which the locks are\nreleased (for consistency with _bt_unlink_halfdead_page()). It also\nchanges the comments, and adds a new paragraph to the README. The\nexisting paragraph was about cross-level differences, this new one is\nabout same-level differences (plus a second new paragraph to talk\nabout backwards scans + page deletion).\n\nThis revised version is essentially the same as your original patch --\nI have only made superficial adjuments. 
I think that I will be able to\ncommit this next week, barring objections.\n\n--\nPeter Geoghegan", "msg_date": "Sat, 1 Aug 2020 11:30:29 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Btree BackwardScan race condition on Standby during\n VACUUM" }, { "msg_contents": "> On 1 Aug 2020, at 20:30, Peter Geoghegan <pg@bowt.ie> wrote:\n\n> This revised version is essentially the same as your original patch --\n> I have only made superficial adjuments. I think that I will be able to\n> commit this next week, barring objections.\n\nAs we're out of time for the July CF where this is registered, I've moved this\nto 2020-09. Based on the above comment, I've marked it Ready for Committer.\n\ncheers ./daniel\n\n", "msg_date": "Sat, 1 Aug 2020 23:22:14 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Btree BackwardScan race condition on Standby during\n VACUUM" }, { "msg_contents": "Hello, Peter.\n\n> Attached is a revised version of your patch\nThanks for your work, the patch is looking better now.\n\nMichail.\n\n\n", "msg_date": "Sun, 2 Aug 2020 19:07:33 +0300", "msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Btree BackwardScan race condition on Standby during\n VACUUM" }, { "msg_contents": "On Sun, Aug 2, 2020 at 9:07 AM Michail Nikolaev\n<michail.nikolaev@gmail.com> wrote:\n> Thanks for your work, the patch is looking better now.\n\nPushed -- thanks!\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 3 Aug 2020 15:55:36 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Btree BackwardScan race condition on Standby during\n VACUUM" } ]
[ { "msg_contents": "Hi,\n\nI don't know if this is a bug or the intended mode,\nbut since ODBC works and JDBC does not, I would ask why JDBC prepared \ninsert does not work if ODBC prepared insert works\nin case some varchar field contains 0x00 and DB is SQL_ASCII?\n\nMy environment:\n[root @ vlada-home ~ 16:32:56] $ psql -h localhost -U vlada \nasoftdev-ispp-pprostor-beograd\npsql (13devel)\nType \"help\" for help.\nasoftdev-ispp-pprostor-beograd = # \\ l asoftdev-ispp-pprostor-beograd\n                                          List of databases\n               Name | Owner | Encoding | Collate | Ctype | Access privileges\n-------------------------------- + ------- + --------- - + ------------- \n+ ------------- + -------------------\n  asoftdev-ispp-pprostor-belgrade | vlada | SQL_ASCII | en_US.UTF-8 | \nen_US.UTF-8 |\n(1 row)\nasoftdev-ispp-pprostor-beograd = # select version ();\n                                     version\n-------------------------------------------------- \n------------------------------\n  PostgreSQL 13devel on x86_64-pc-linux-gnu, compiled by gcc (GCC) \n9.2.0, 64-bit\n(1 row)\nasoftdev-ispp-pprostor-beograd = # \\ q\n[root @ vlada-home ~ 4:34:16 PM] $\n\ncat /root/pglog/postgresql-2020-03-16_000000.log\n---\n2020-03-16 16:04:18 CET [unknown] 127.0.0.1 (39582) 36708 0 00000 LOG: \nconnection received: host = 127.0.0.1 port = 39582\n2020-03-16 16:04:18 CET asoftdev-ispp-pprostor-beograd 127.0.0.1 (39582) \n36708 0 00000 LOG: connection authorized: user = vlada database = \nasoftdev-ispp-pprostor-beograd\n2020-03-16 16:04:18 CET asoftdev-ispp-pprostor-beograd 127.0.0.1 (39582) \n36708 0 00000 LOG: duration: 0.283 ms\n2020-03-16 16:04:18 CET asoftdev-ispp-pprostor-beograd 127.0.0.1 (39582) \n36708 0 00000 LOG: duration: 0.017 ms\n2020-03-16 16:04:18 CET asoftdev-ispp-pprostor-beograd 127.0.0.1 (39582) \n36708 0 00000 LOG: execute <unnamed>: SET extra_float_digits = 3\n2020-03-16 16:04:18 CET asoftdev-ispp-pprostor-beograd 127.0.0.1 (39582) 
\n36708 0 00000 LOG: duration: 0.046 ms\n2020-03-16 16:04:18 CET asoftdev-ispp-pprostor-beograd 127.0.0.1 (39582) \n36708 0 00000 LOG: duration: 0.029 ms\n2020-03-16 16:04:18 CET asoftdev-ispp-pprostor-beograd 127.0.0.1 (39582) \n36708 0 00000 LOG: duration: 0.010 ms\n2020-03-16 16:04:18 CET asoftdev-ispp-pprostor-beograd 127.0.0.1 (39582) \n36708 0 00000 LOG: execute <unnamed>: SET application_name = 'PostgreSQL \nJDBC Driver'\n2020-03-16 16:04:18 CET asoftdev-ispp-pprostor-beograd 127.0.0.1 (39582) \n36708 0 00000 LOG: duration: 0.043 ms\n2020-03-16 16:04:18 CET asoftdev-ispp-pprostor-beograd 127.0.0.1 (39582) \n36708 0 00000 LOG: duration: 0.094 ms\n2020-03-16 16:04:18 CET asoftdev-ispp-pprostor-beograd 127.0.0.1 (39582) \n36708 0 00000 LOG: duration: 0.019 ms\n2020-03-16 16:04:18 CET asoftdev-ispp-pprostor-beograd 127.0.0.1 (39582) \n36708 0 00000 LOG: execute <unnamed>: BEGIN\n2020-03-16 16:04:18 CET asoftdev-ispp-pprostor-beograd 127.0.0.1 (39582) \n36708 0 00000 LOG: duration: 0.201 ms\n2020-03-16 16:04:18 CET asoftdev-ispp-pprostor-beograd 127.0.0.1 (39582) \n36708 0 00000 LOG: duration: 0.041 ms\n2020-03-16 16:04:18 CET asoftdev-ispp-pprostor-beograd 127.0.0.1 (39582) \n36708 0 00000 LOG: duration: 0.023 ms\n2020-03-16 16:04:18 CET asoftdev-ispp-pprostor-beograd 127.0.0.1 (39582) \n36708 0 00000 LOG: execute <unnamed>: SET CLIENT_ENCODING TO SQL_ASCII\n2020-03-16 16:04:18 CET asoftdev-ispp-pprostor-beograd 127.0.0.1 (39582) \n36708 0 00000 LOG: duration: 0.061 ms\n2020-03-16 16:04:18 CET asoftdev-ispp-pprostor-beograd 127.0.0.1 (39582) \n36708 0 00000 LOG: duration: 2.372 ms\n2020-03-16 16:04:18 CET asoftdev-ispp-pprostor-beograd 127.0.0.1 (39582) \n36708 0 00000 LOG: duration: 1.788 ms\n2020-03-16 16:04:18 CET asoftdev-ispp-pprostor-beograd 127.0.0.1 (39582) \n36708 0 00000 LOG: execute <unnamed>: select keyfield, record from \npublic.ispp_group order by keyfield\n...\n2020-03-16 16:04:41 CET asoftdev-ispp-pprostor-beograd 127.0.0.1 (39582) \n36708 137187 
22021 ERROR: invalid byte sequence for encoding \n\"SQL_ASCII\": 0x00\n2020-03-16 16:04:41 CET asoftdev-ispp-pprostor-belgrade 127.0.0.1 \n(39582) 36708 137187 22021 STATEMENT: INSERT INTO \nSekretariat_2019.ispp_promene VALUES ($1, $2, $3, $4, $5, $6, $7. $8, \n$9, $10, $11, $12, $13, $14, $15, $16, $17, $18, $19, $20, $21, $22, \n$23, $24, $25, $26, $27, $28, $29, $30, $31)\n2020-03-16 16:05:15 CET asoftdev-ispp-pprostor-beograd 127.0.0.1 (39582) \n36708 0 00000 LOG: disconnection: session time: 0: 00: 57.749 user = \nvlada database = asoftdev-ispp-pprostor- beograd host = 127.0.0.1 port = \n39582\n\nDon't ask me why DB is SQL_ASCII!\n\nVladimir Kokovic, DP senior (69)\nSerbia, Belgrade, March 16, 2020", "msg_date": "Mon, 16 Mar 2020 17:04:35 +0100", "msg_from": "=?UTF-8?Q?gmail_Vladimir_Kokovi=c4=87?= <vladimir.kokovic@gmail.com>", "msg_from_op": true, "msg_subject": "JDBC prepared insert and X00 and SQL_ASCII" }, { "msg_contents": "Hi,\n\n\nAfter a thorough Java-Swig-libpq test, I can confirm that INSERT/SELECT \nis working properly:\n1. server_encoding: SQL_ASCII\n2. client_encoding: SQL_ASCII\n3. 
INSERT / SELECT java string with x00\n\n\nlibpq, psql - everything is OK !\n\n\nVladimir Kokovic, DP senior (69)\nSerbia, Belgrade, March 18, 2020\n\nOn 16.3.20. 17:04, gmail Vladimir Koković wrote:\n>\n> Hi,\n>\n> I don't know if this is a bug or the intended mode,\n> but since ODBC works and JDBC does not, I would ask why JDBC prepared \n> insert does not work if ODBC prepared insert works\n> in case some varchar field contains 0x00 and DB is SQL_ASCII?\n>\n", "msg_date": "Wed, 18 Mar 2020 13:56:02 +0100", "msg_from": "=?UTF-8?Q?gmail_Vladimir_Kokovi=c4=87?= <vladimir.kokovic@gmail.com>", "msg_from_op": true, "msg_subject": "Re: JDBC prepared insert and X00 and SQL_ASCII" }, { "msg_contents": "On Wed, 18 Mar 2020 at 08:56, gmail Vladimir Koković <\nvladimir.kokovic@gmail.com> wrote:\n\n> Hi,\n>\n>\n> After a thorough Java-Swig-libpq test, I can confirm that INSERT/SELECT is\n> working properly:\n> 1. server_encoding: SQL_ASCII\n> 2. client_encoding: SQL_ASCII\n> 3. INSERT / SELECT java string with x00\n>\n>\n> libpq, psql - everything is OK !\n>\n>\n> Vladimir Kokovic, DP senior (69)\n> Serbia, Belgrade, March 18, 2020\n> On 16.3.20. 
17:04, gmail Vladimir Koković wrote:\n>\n> Hi,\n>\n> I don't know if this is a bug or the intended mode,\n> but since ODBC works and JDBC does not, I would ask why JDBC prepared\n> insert does not work if ODBC prepared insert works\n> in case some varchar field contains 0x00 and DB is SQL_ASCII?\n>\n>\n>\nI responded on the github issue, but you cannot simply change the client\nencoding for the JDBC driver. This is not allowed even though there is a\nsetting for allowClientEncodingChanges this is for specific purpose\n\n*When using the V3 protocol the driver monitors changes in certain server\nconfiguration parameters that should not be touched by end users.\nThe client_encoding setting is set by the driver and should not be altered.\nIf the driver detects a change it will abort the connection. There is one\nlegitimate exception to this behaviour though, using the COPY command on a\nfile residing on the server's filesystem. The only means of specifying the\nencoding of this file is by altering the client_encoding setting. The JDBC\nteam considers this a failing of the COPY command and hopes to provide an\nalternate means of specifying the encoding in the future, but for now there\nis this URL parameter. Enable this only if you need to override the client\nencoding when doing a copy.*\n\nDave Cramer\nwww.postgres.rocks", "msg_date": "Thu, 19 Mar 2020 10:52:31 -0400", "msg_from": "Dave Cramer <davecramer@postgres.rocks>", "msg_from_op": false, "msg_subject": "Re: JDBC prepared insert and X00 and SQL_ASCII" } ]
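
For background on the thread above: a PostgreSQL text value can never contain the byte 0x00, whatever the server encoding, so the difference between the ODBC and JDBC results comes down to what each driver actually sends for a string containing NUL. The Python sketch below is purely illustrative — `check_text_param` and `sanitize` are hypothetical helpers, not server or pgJDBC APIs — modelling the 22021-style rejection seen in the log and the usual client-side workaround of stripping NUL before binding the parameter.

```python
# Hypothetical model of the server-side rejection and a client-side fix.
# PostgreSQL rejects a NUL byte in any text parameter; a client that strips
# NUL before sending avoids the error at the cost of altering the value.

def check_text_param(value: bytes) -> bytes:
    # Stand-in for the server's encoding verification of a text parameter.
    if b"\x00" in value:
        raise ValueError('invalid byte sequence for encoding "SQL_ASCII": 0x00')
    return value

def sanitize(value: bytes) -> bytes:
    # What a client can do instead: strip NUL bytes before binding.
    return value.replace(b"\x00", b"")

raw = b"ispp\x00record"

try:
    check_text_param(raw)
    verbatim = "accepted"
except ValueError as e:
    verbatim = str(e)

cleaned = check_text_param(sanitize(raw))
print(verbatim)  # the rejection a verbatim driver would trigger
print(cleaned)   # the value after client-side NUL stripping
```

This matches the behaviour described in the thread: psql and the libpq test succeeded because no NUL byte reached the server verbatim, while the JDBC prepared insert forwarded it and hit the 22021 error.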
[ { "msg_contents": "Hello hackers:\n\n\nOur standby node got a FATAL error after the crash recovery. The fatal error was raised in the slru module; I changed the log level from ERROR to PANIC and got the following stack.\n\n\n(gdb) bt\n#0 0x00007f0cc47a1277 in raise () from /lib64/libc.so.6\n#1 0x00007f0cc47a2968 in abort () from /lib64/libc.so.6\n#2 0x0000000000a48347 in errfinish (dummy=dummy@entry=0) at elog.c:616\n#3 0x00000000005315dd in SlruReportIOError (ctl=ctl@entry=0xfbad00 <ClogCtlData>, pageno=1947, xid=xid@entry=63800060) at slru.c:1175\n#4 0x0000000000533152 in SimpleLruReadPage (ctl=ctl@entry=0xfbad00 <ClogCtlData>, pageno=1947, write_ok=write_ok@entry=true, xid=xid@entry=63800060) at slru.c:610\n#5 0x0000000000533350 in SimpleLruReadPage_ReadOnly (ctl=ctl@entry=0xfbad00 <ClogCtlData>, pageno=pageno@entry=1947, xid=xid@entry=63800060) at slru.c:680\n#6 0x00000000005293fd in TransactionIdGetStatus (xid=xid@entry=63800060, lsn=lsn@entry=0x7ffd17fc5130) at clog.c:661\n#7 0x000000000053574a in TransactionLogFetch (transactionId=63800060) at transam.c:79\n#8 TransactionIdDidCommit (transactionId=transactionId@entry=63800060) at transam.c:129\n#9 0x00000000004f1295 in HeapTupleHeaderAdvanceLatestRemovedXid (tuple=0x2aab27e936e0, latestRemovedXid=latestRemovedXid@entry=0x7ffd17fc51b0) at heapam.c:7672\n#10 0x00000000005103e0 in btree_xlog_delete_get_latestRemovedXid (record=record@entry=0x4636c98) at nbtxlog.c:656\n#11 0x0000000000510a19 in btree_xlog_delete (record=0x4636c98) at nbtxlog.c:707\n#12 btree_redo (record=0x4636c98) at nbtxlog.c:1048\n#13 0x00000000005544a1 in StartupXLOG () at xlog.c:7825\n#14 0x00000000008185be in StartupProcessMain () at startup.c:226\n#15 0x000000000058de15 in AuxiliaryProcessMain (argc=argc@entry=2, argv=argv@entry=0x7ffd17fc9430) at bootstrap.c:448\n#16 0x0000000000813fe4 in StartChildProcess (type=StartupProcess) at postmaster.c:5804\n#17 0x0000000000817eb0 in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0x45ed6e0) at 
postmaster.c:1461\n#18 0x00000000004991f4 in main (argc=3, argv=0x45ed6e0) at main.c:232\n(gdb) p /x *record\n$10 = {wal_segment_size = 0x40000000, read_page = 0x54f920, system_identifier = 0x5e6e6ea4af938064, private_data = 0x7ffd17fc5390, ReadRecPtr = 0xe41e8fb28, EndRecPtr = 0xe41e8fbb0, decoded_record = 0x4634390, main_data = 0x4634c88, main_data_len = 0x54,\n main_data_bufsz = 0x1000, record_origin = 0x0, blocks = {{in_use = 0x1, rnode = {spcNode = 0x67f, dbNode = 0x7517968, relNode = 0x7517e81}, forknum = 0x0, blkno = 0x55f, flags = 0x0, has_image = 0x0, apply_image = 0x0, bkp_image = 0x4632bc1, hole_offset = 0x40,\n hole_length = 0x1fb0, bimg_len = 0x50, bimg_info = 0x5, has_data = 0x0, data = 0x4656938, data_len = 0x0, data_bufsz = 0x2000}, {in_use = 0x0, rnode = {spcNode = 0x67f, dbNode = 0x7517968, relNode = 0x7517e2b}, forknum = 0x0, blkno = 0x3b5f, flags = 0x80,\n has_image = 0x0, apply_image = 0x0, bkp_image = 0x0, hole_offset = 0x0, hole_length = 0x0, bimg_len = 0x0, bimg_info = 0x0, has_data = 0x0, data = 0x46468e8, data_len = 0x0, data_bufsz = 0x2000}, {in_use = 0x0, rnode = {spcNode = 0x67f, dbNode = 0x7517968,\n relNode = 0x7e5c370}, forknum = 0x0, blkno = 0x2c3, flags = 0x80, has_image = 0x0, apply_image = 0x0, bkp_image = 0x0, hole_offset = 0x0, hole_length = 0x0, bimg_len = 0x0, bimg_info = 0x0, has_data = 0x0, data = 0x0, data_len = 0x0, data_bufsz = 0x0}, {\n in_use = 0x0, rnode = {spcNode = 0x67f, dbNode = 0x7517968, relNode = 0x7517e77}, forknum = 0x0, blkno = 0xa8a, flags = 0x80, has_image = 0x0, apply_image = 0x0, bkp_image = 0x0, hole_offset = 0x0, hole_length = 0x0, bimg_len = 0x0, bimg_info = 0x0,\n has_data = 0x0, data = 0x0, data_len = 0x0, data_bufsz = 0x0}, {in_use = 0x0, rnode = {spcNode = 0x0, dbNode = 0x0, relNode = 0x0}, forknum = 0x0, blkno = 0x0, flags = 0x0, has_image = 0x0, apply_image = 0x0, bkp_image = 0x0, hole_offset = 0x0, hole_length = 0x0,\n bimg_len = 0x0, bimg_info = 0x0, has_data = 0x0, data = 0x0, data_len = 0x0, 
data_bufsz = 0x0} <repeats 29 times>}, max_block_id = 0x0, readBuf = 0x4632868, readLen = 0x2000, readSegNo = 0x39, readOff = 0x1e8e000, readPageTLI = 0x1, latestPagePtr = 0xe41e8e000,\n latestPageTLI = 0x1, currRecPtr = 0xe41e8fb28, currTLI = 0x0, currTLIValidUntil = 0x0, nextTLI = 0x0, readRecordBuf = 0x4638888, readRecordBufSize = 0xa000, errormsg_buf = 0x4634878, noPayload = 0x0, polar_logindex_meta_size = 0x2e}\n(gdb) p /x minRecoveryPoint\n$11 = 0xe41bbbfd8\n(gdb) p reachedConsistency\n$12 = true\n(gdb) p standbyState\n$13 = STANDBY_SNAPSHOT_READY\n(gdb) p ArchiveRecoveryRequested\n$14 = true\n\n\nAfter the crash the standby's redo started from 0xDBE1241D8 and reached consistency at 0xE41BBBFD8 because of the previous minRecoveryPoint. It did not replay all WAL records after the crash.\nFrom the crash stack we see that it was reading the clog to check the status of xid=63800060.\nBut in the WAL file we see that xid=63800060 first appears in the xlog record at lsn=0xE42C22D68.\n\n\nrmgr: Heap len (rec/tot): 79/ 79, tx: 63800060, lsn: E/42C22D68, prev E/42C22D40, desc: UPDATE off 45 xmax 63800060 ; new off 56 xmax 0, blkref #0: rel 1663/122780008/122781225 blk 14313\nrmgr: Btree len (rec/tot): 64/ 64, tx: 63800060, lsn: E/42C22DB8, prev E/42C22D68, desc: INSERT_LEAF off 200, blkref #0: rel 1663/122780008/122781297 blk 2803\nrmgr: Btree len (rec/tot): 64/ 64, tx: 63800060, lsn: E/42C22DF8, prev E/42C22DB8, desc: INSERT_LEAF off 333, blkref #0: rel 1663/122780008/122781313 blk 1375\nrmgr: Btree len (rec/tot): 64/ 64, tx: 63800060, lsn: E/42C22E38, prev E/42C22DF8, desc: INSERT_LEAF off 259, blkref #0: rel 1663/122780008/132582066 blk 1417\nrmgr: Heap len (rec/tot): 197/ 197, tx: 63800060, lsn: E/42C23628, prev E/42C23600, desc: HOT_UPDATE off 35 xmax 63800060 ; new off 55 xmax 0, blkref #0: rel 1663/122780008/122781222 blk 14320\nrmgr: Heap len (rec/tot): 54/ 54, tx: 63800060, lsn: E/42C23CF0, prev E/42C23CB0, desc: DELETE off 2 KEYS_UPDATED , blkref #0: rel 
1663/122780008/122781230 blk 14847\nrmgr: Heap len (rec/tot): 253/ 253, tx: 63800060, lsn: E/42C260E8, prev E/42C260A8, desc: INSERT off 11, blkref #0: rel 1663/122780008/122781230 blk 30300\nrmgr: Btree len (rec/tot): 64/ 64, tx: 63800060, lsn: E/42C26308, prev E/42C262C8, desc: INSERT_LEAF off 362, blkref #0: rel 1663/122780008/122781290 blk 2925\nrmgr: Btree len (rec/tot): 64/ 64, tx: 63800060, lsn: E/42C266F0, prev E/42C266B0, desc: INSERT_LEAF off 369, blkref #0: rel 1663/122780008/122781315 blk 1377\nrmgr: Btree len (rec/tot): 64/ 64, tx: 63800060, lsn: E/42C26AE8, prev E/42C26AC0, desc: INSERT_LEAF off 308, blkref #0: rel 1663/122780008/132498288 blk 583\nrmgr: Transaction len (rec/tot): 34/ 34, tx: 63800060, lsn: E/42C271A8, prev E/42C27168, desc: COMMIT 2020-03-16 09:56:21.540818 CST\nrmgr: Heap2 len (rec/tot): 90/ 90, tx: 0, lsn: E/4351D3A0, prev E/4351D360, desc: CLEAN remxid 63800060, blkref #0: rel 1663/122780008/122781225 blk 14313\nrmgr: Heap2 len (rec/tot): 96/ 96, tx: 0, lsn: E/4381C898, prev E/4381C860, desc: CLEAN remxid 63800060, blkref #0: rel 1663/122780008/122781222 blk 14320\n\n\nIs it caused by an inconsistency between the clog and the data pages, e.g. between minRecoveryPoint 0xE41BBBFD8 and 0xE42C22D68 a dirty page was flushed to storage but the clog was not flushed, and then it crashed?\n\n\nBRS\nRay\n
", "msg_date": "Tue, 17 Mar 2020 00:33:36 +0800 (CST)", "msg_from": "Thunder <thunder1@126.com>", "msg_from_op": true, "msg_subject": "Standby got fatal after the crash recovery" }, { "msg_contents": "Appreciate any suggestion for this issue?\nIs there something i misunderstand?\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nAt 2020-03-17 00:33:36, \"Thunder\" <thunder1@126.com> wrote:\n\nHello hackers:\n\n\nOur standby node got fatal after the crash recovery. 
The fatal error was caused in slru module, i changed log level from ERROR to PANIC and got the following stack.\n\n\n(gdb) bt\n#0 0x00007f0cc47a1277 in raise () from /lib64/libc.so.6\n#1 0x00007f0cc47a2968 in abort () from /lib64/libc.so.6\n#2 0x0000000000a48347 in errfinish (dummy=dummy@entry=0) at elog.c:616\n#3 0x00000000005315dd in SlruReportIOError (ctl=ctl@entry=0xfbad00 <ClogCtlData>, pageno=1947, xid=xid@entry=63800060) at slru.c:1175\n#4 0x0000000000533152 in SimpleLruReadPage (ctl=ctl@entry=0xfbad00 <ClogCtlData>, pageno=1947, write_ok=write_ok@entry=true, xid=xid@entry=63800060) at slru.c:610\n#5 0x0000000000533350 in SimpleLruReadPage_ReadOnly (ctl=ctl@entry=0xfbad00 <ClogCtlData>, pageno=pageno@entry=1947, xid=xid@entry=63800060) at slru.c:680\n#6 0x00000000005293fd in TransactionIdGetStatus (xid=xid@entry=63800060, lsn=lsn@entry=0x7ffd17fc5130) at clog.c:661\n#7 0x000000000053574a in TransactionLogFetch (transactionId=63800060) at transam.c:79\n#8 TransactionIdDidCommit (transactionId=transactionId@entry=63800060) at transam.c:129\n#9 0x00000000004f1295 in HeapTupleHeaderAdvanceLatestRemovedXid (tuple=0x2aab27e936e0, latestRemovedXid=latestRemovedXid@entry=0x7ffd17fc51b0) at heapam.c:7672\n#10 0x00000000005103e0 in btree_xlog_delete_get_latestRemovedXid (record=record@entry=0x4636c98) at nbtxlog.c:656\n#11 0x0000000000510a19 in btree_xlog_delete (record=0x4636c98) at nbtxlog.c:707\n#12 btree_redo (record=0x4636c98) at nbtxlog.c:1048\n#13 0x00000000005544a1 in StartupXLOG () at xlog.c:7825\n#14 0x00000000008185be in StartupProcessMain () at startup.c:226\n#15 0x000000000058de15 in AuxiliaryProcessMain (argc=argc@entry=2, argv=argv@entry=0x7ffd17fc9430) at bootstrap.c:448\n#16 0x0000000000813fe4 in StartChildProcess (type=StartupProcess) at postmaster.c:5804\n#17 0x0000000000817eb0 in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0x45ed6e0) at postmaster.c:1461\n#18 0x00000000004991f4 in main (argc=3, argv=0x45ed6e0) at main.c:232\n(gdb) p /x 
*record\n$10 = {wal_segment_size = 0x40000000, read_page = 0x54f920, system_identifier = 0x5e6e6ea4af938064, private_data = 0x7ffd17fc5390, ReadRecPtr = 0xe41e8fb28, EndRecPtr = 0xe41e8fbb0, decoded_record = 0x4634390, main_data = 0x4634c88, main_data_len = 0x54,\n main_data_bufsz = 0x1000, record_origin = 0x0, blocks = {{in_use = 0x1, rnode = {spcNode = 0x67f, dbNode = 0x7517968, relNode = 0x7517e81}, forknum = 0x0, blkno = 0x55f, flags = 0x0, has_image = 0x0, apply_image = 0x0, bkp_image = 0x4632bc1, hole_offset = 0x40,\n hole_length = 0x1fb0, bimg_len = 0x50, bimg_info = 0x5, has_data = 0x0, data = 0x4656938, data_len = 0x0, data_bufsz = 0x2000}, {in_use = 0x0, rnode = {spcNode = 0x67f, dbNode = 0x7517968, relNode = 0x7517e2b}, forknum = 0x0, blkno = 0x3b5f, flags = 0x80,\n has_image = 0x0, apply_image = 0x0, bkp_image = 0x0, hole_offset = 0x0, hole_length = 0x0, bimg_len = 0x0, bimg_info = 0x0, has_data = 0x0, data = 0x46468e8, data_len = 0x0, data_bufsz = 0x2000}, {in_use = 0x0, rnode = {spcNode = 0x67f, dbNode = 0x7517968,\n relNode = 0x7e5c370}, forknum = 0x0, blkno = 0x2c3, flags = 0x80, has_image = 0x0, apply_image = 0x0, bkp_image = 0x0, hole_offset = 0x0, hole_length = 0x0, bimg_len = 0x0, bimg_info = 0x0, has_data = 0x0, data = 0x0, data_len = 0x0, data_bufsz = 0x0}, {\n in_use = 0x0, rnode = {spcNode = 0x67f, dbNode = 0x7517968, relNode = 0x7517e77}, forknum = 0x0, blkno = 0xa8a, flags = 0x80, has_image = 0x0, apply_image = 0x0, bkp_image = 0x0, hole_offset = 0x0, hole_length = 0x0, bimg_len = 0x0, bimg_info = 0x0,\n has_data = 0x0, data = 0x0, data_len = 0x0, data_bufsz = 0x0}, {in_use = 0x0, rnode = {spcNode = 0x0, dbNode = 0x0, relNode = 0x0}, forknum = 0x0, blkno = 0x0, flags = 0x0, has_image = 0x0, apply_image = 0x0, bkp_image = 0x0, hole_offset = 0x0, hole_length = 0x0,\n bimg_len = 0x0, bimg_info = 0x0, has_data = 0x0, data = 0x0, data_len = 0x0, data_bufsz = 0x0} <repeats 29 times>}, max_block_id = 0x0, readBuf = 0x4632868, readLen = 0x2000, 
readSegNo = 0x39, readOff = 0x1e8e000, readPageTLI = 0x1, latestPagePtr = 0xe41e8e000,\n latestPageTLI = 0x1, currRecPtr = 0xe41e8fb28, currTLI = 0x0, currTLIValidUntil = 0x0, nextTLI = 0x0, readRecordBuf = 0x4638888, readRecordBufSize = 0xa000, errormsg_buf = 0x4634878, noPayload = 0x0, polar_logindex_meta_size = 0x2e}\n(gdb) p /x minRecoveryPoint\n$11 = 0xe41bbbfd8\n(gdb) p reachedConsistency\n$12 = true\n(gdb) p standbyState\n$13 = STANDBY_SNAPSHOT_READY\n(gdb) p ArchiveRecoveryRequested\n$14 = true\n\n\nAfter the crash the standby redo started from 0xDBE1241D8, and reached consistency at 0xE41BBBFD8 because of previous minRecoveryPoint. It did not repaly all WAL record after the crash.\nFrom the crash stack we see that it was reading clog to check xid= 63800060 status.\nBut in wal file we see that xid= 63800060 was first created by xlog record which lsn=0xE42C22D68.\n\n\nrmgr: Heap len (rec/tot): 79/ 79, tx: 63800060, lsn: E/42C22D68, prev E/42C22D40, desc: UPDATE off 45 xmax 63800060 ; new off 56 xmax 0, blkref #0: rel 1663/122780008/122781225 blk 14313\nrmgr: Btree len (rec/tot): 64/ 64, tx: 63800060, lsn: E/42C22DB8, prev E/42C22D68, desc: INSERT_LEAF off 200, blkref #0: rel 1663/122780008/122781297 blk 2803\nrmgr: Btree len (rec/tot): 64/ 64, tx: 63800060, lsn: E/42C22DF8, prev E/42C22DB8, desc: INSERT_LEAF off 333, blkref #0: rel 1663/122780008/122781313 blk 1375\nrmgr: Btree len (rec/tot): 64/ 64, tx: 63800060, lsn: E/42C22E38, prev E/42C22DF8, desc: INSERT_LEAF off 259, blkref #0: rel 1663/122780008/132582066 blk 1417\nrmgr: Heap len (rec/tot): 197/ 197, tx: 63800060, lsn: E/42C23628, prev E/42C23600, desc: HOT_UPDATE off 35 xmax 63800060 ; new off 55 xmax 0, blkref #0: rel 1663/122780008/122781222 blk 14320\nrmgr: Heap len (rec/tot): 54/ 54, tx: 63800060, lsn: E/42C23CF0, prev E/42C23CB0, desc: DELETE off 2 KEYS_UPDATED , blkref #0: rel 1663/122780008/122781230 blk 14847\nrmgr: Heap len (rec/tot): 253/ 253, tx: 63800060, lsn: E/42C260E8, prev 
E/42C260A8, desc: INSERT off 11, blkref #0: rel 1663/122780008/122781230 blk 30300\nrmgr: Btree len (rec/tot): 64/ 64, tx: 63800060, lsn: E/42C26308, prev E/42C262C8, desc: INSERT_LEAF off 362, blkref #0: rel 1663/122780008/122781290 blk 2925\nrmgr: Btree len (rec/tot): 64/ 64, tx: 63800060, lsn: E/42C266F0, prev E/42C266B0, desc: INSERT_LEAF off 369, blkref #0: rel 1663/122780008/122781315 blk 1377\nrmgr: Btree len (rec/tot): 64/ 64, tx: 63800060, lsn: E/42C26AE8, prev E/42C26AC0, desc: INSERT_LEAF off 308, blkref #0: rel 1663/122780008/132498288 blk 583\nrmgr: Transaction len (rec/tot): 34/ 34, tx: 63800060, lsn: E/42C271A8, prev E/42C27168, desc: COMMIT 2020-03-16 09:56:21.540818 CST\nrmgr: Heap2 len (rec/tot): 90/ 90, tx: 0, lsn: E/4351D3A0, prev E/4351D360, desc: CLEAN remxid 63800060, blkref #0: rel 1663/122780008/122781225 blk 14313\nrmgr: Heap2 len (rec/tot): 96/ 96, tx: 0, lsn: E/4381C898, prev E/4381C860, desc: CLEAN remxid 63800060, blkref #0: rel 1663/122780008/122781222 blk 14320\n\n\nIs it caused by inconsistency clog and data page ,like from minRecoveryPoint 0xE41BBBFD8 to 0xE42C22D68 dirty page was flushed to the storage but clog was not flushed and then crashed?\n\n\nBRS\nRay", "msg_date": "Tue, 17 Mar 2020 10:36:03 +0800 (CST)", "msg_from": "Thunder <thunder1@126.com>", "msg_from_op": false, "msg_subject": "Re:Standby got fatal after the crash recovery" }, { "msg_contents": "The slru error detail is the following.\nDETAIL: Could not read from file \"/data/pg_xact/003C\" at offset 221184: Success.\n\n\nI think it read /data/pg_xact/003C at offset 221184 and the read returned 0 bytes.\n\n\n\n\n\n\n\n\n\n\n\n\nAt 2020-03-17 10:36:03, \"Thunder\" <thunder1@126.com> wrote:\n\nAppreciate any suggestion for this issue?\nIs there something i misunderstand?\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nAt 2020-03-17 00:33:36, \"Thunder\" <thunder1@126.com> 
wrote:\n\nHello hackers:\n\n\nOur standby node got fatal after the crash recovery. The fatal error was caused in slru module, i changed log level from ERROR to PANIC and got the following stack.\n\n\n(gdb) bt\n#0 0x00007f0cc47a1277 in raise () from /lib64/libc.so.6\n#1 0x00007f0cc47a2968 in abort () from /lib64/libc.so.6\n#2 0x0000000000a48347 in errfinish (dummy=dummy@entry=0) at elog.c:616\n#3 0x00000000005315dd in SlruReportIOError (ctl=ctl@entry=0xfbad00 <ClogCtlData>, pageno=1947, xid=xid@entry=63800060) at slru.c:1175\n#4 0x0000000000533152 in SimpleLruReadPage (ctl=ctl@entry=0xfbad00 <ClogCtlData>, pageno=1947, write_ok=write_ok@entry=true, xid=xid@entry=63800060) at slru.c:610\n#5 0x0000000000533350 in SimpleLruReadPage_ReadOnly (ctl=ctl@entry=0xfbad00 <ClogCtlData>, pageno=pageno@entry=1947, xid=xid@entry=63800060) at slru.c:680\n#6 0x00000000005293fd in TransactionIdGetStatus (xid=xid@entry=63800060, lsn=lsn@entry=0x7ffd17fc5130) at clog.c:661\n#7 0x000000000053574a in TransactionLogFetch (transactionId=63800060) at transam.c:79\n#8 TransactionIdDidCommit (transactionId=transactionId@entry=63800060) at transam.c:129\n#9 0x00000000004f1295 in HeapTupleHeaderAdvanceLatestRemovedXid (tuple=0x2aab27e936e0, latestRemovedXid=latestRemovedXid@entry=0x7ffd17fc51b0) at heapam.c:7672\n#10 0x00000000005103e0 in btree_xlog_delete_get_latestRemovedXid (record=record@entry=0x4636c98) at nbtxlog.c:656\n#11 0x0000000000510a19 in btree_xlog_delete (record=0x4636c98) at nbtxlog.c:707\n#12 btree_redo (record=0x4636c98) at nbtxlog.c:1048\n#13 0x00000000005544a1 in StartupXLOG () at xlog.c:7825\n#14 0x00000000008185be in StartupProcessMain () at startup.c:226\n#15 0x000000000058de15 in AuxiliaryProcessMain (argc=argc@entry=2, argv=argv@entry=0x7ffd17fc9430) at bootstrap.c:448\n#16 0x0000000000813fe4 in StartChildProcess (type=StartupProcess) at postmaster.c:5804\n#17 0x0000000000817eb0 in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0x45ed6e0) at postmaster.c:1461\n#18 
0x00000000004991f4 in main (argc=3, argv=0x45ed6e0) at main.c:232\n(gdb) p /x *record\n$10 = {wal_segment_size = 0x40000000, read_page = 0x54f920, system_identifier = 0x5e6e6ea4af938064, private_data = 0x7ffd17fc5390, ReadRecPtr = 0xe41e8fb28, EndRecPtr = 0xe41e8fbb0, decoded_record = 0x4634390, main_data = 0x4634c88, main_data_len = 0x54,\n main_data_bufsz = 0x1000, record_origin = 0x0, blocks = {{in_use = 0x1, rnode = {spcNode = 0x67f, dbNode = 0x7517968, relNode = 0x7517e81}, forknum = 0x0, blkno = 0x55f, flags = 0x0, has_image = 0x0, apply_image = 0x0, bkp_image = 0x4632bc1, hole_offset = 0x40,\n hole_length = 0x1fb0, bimg_len = 0x50, bimg_info = 0x5, has_data = 0x0, data = 0x4656938, data_len = 0x0, data_bufsz = 0x2000}, {in_use = 0x0, rnode = {spcNode = 0x67f, dbNode = 0x7517968, relNode = 0x7517e2b}, forknum = 0x0, blkno = 0x3b5f, flags = 0x80,\n has_image = 0x0, apply_image = 0x0, bkp_image = 0x0, hole_offset = 0x0, hole_length = 0x0, bimg_len = 0x0, bimg_info = 0x0, has_data = 0x0, data = 0x46468e8, data_len = 0x0, data_bufsz = 0x2000}, {in_use = 0x0, rnode = {spcNode = 0x67f, dbNode = 0x7517968,\n relNode = 0x7e5c370}, forknum = 0x0, blkno = 0x2c3, flags = 0x80, has_image = 0x0, apply_image = 0x0, bkp_image = 0x0, hole_offset = 0x0, hole_length = 0x0, bimg_len = 0x0, bimg_info = 0x0, has_data = 0x0, data = 0x0, data_len = 0x0, data_bufsz = 0x0}, {\n in_use = 0x0, rnode = {spcNode = 0x67f, dbNode = 0x7517968, relNode = 0x7517e77}, forknum = 0x0, blkno = 0xa8a, flags = 0x80, has_image = 0x0, apply_image = 0x0, bkp_image = 0x0, hole_offset = 0x0, hole_length = 0x0, bimg_len = 0x0, bimg_info = 0x0,\n has_data = 0x0, data = 0x0, data_len = 0x0, data_bufsz = 0x0}, {in_use = 0x0, rnode = {spcNode = 0x0, dbNode = 0x0, relNode = 0x0}, forknum = 0x0, blkno = 0x0, flags = 0x0, has_image = 0x0, apply_image = 0x0, bkp_image = 0x0, hole_offset = 0x0, hole_length = 0x0,\n bimg_len = 0x0, bimg_info = 0x0, has_data = 0x0, data = 0x0, data_len = 0x0, data_bufsz = 0x0} 
<repeats 29 times>}, max_block_id = 0x0, readBuf = 0x4632868, readLen = 0x2000, readSegNo = 0x39, readOff = 0x1e8e000, readPageTLI = 0x1, latestPagePtr = 0xe41e8e000,\n latestPageTLI = 0x1, currRecPtr = 0xe41e8fb28, currTLI = 0x0, currTLIValidUntil = 0x0, nextTLI = 0x0, readRecordBuf = 0x4638888, readRecordBufSize = 0xa000, errormsg_buf = 0x4634878, noPayload = 0x0, polar_logindex_meta_size = 0x2e}\n(gdb) p /x minRecoveryPoint\n$11 = 0xe41bbbfd8\n(gdb) p reachedConsistency\n$12 = true\n(gdb) p standbyState\n$13 = STANDBY_SNAPSHOT_READY\n(gdb) p ArchiveRecoveryRequested\n$14 = true\n\n\nAfter the crash, standby redo started from 0xDBE1241D8 and reached consistency at 0xE41BBBFD8 because of the previous minRecoveryPoint. It did not replay all WAL records after the crash.\nFrom the crash stack we see that it was reading clog to check the status of xid=63800060.\nBut in the WAL file we see that xid=63800060 first appears in the xlog record whose lsn=0xE42C22D68.\n\n\nrmgr: Heap        len (rec/tot):     79/    79, tx:   63800060, lsn: E/42C22D68, prev E/42C22D40, desc: UPDATE off 45 xmax 63800060 ; new off 56 xmax 0, blkref #0: rel 1663/122780008/122781225 blk 14313\nrmgr: Btree       len (rec/tot):     64/    64, tx:   63800060, lsn: E/42C22DB8, prev E/42C22D68, desc: INSERT_LEAF off 200, blkref #0: rel 1663/122780008/122781297 blk 2803\nrmgr: Btree       len (rec/tot):     64/    64, tx:   63800060, lsn: E/42C22DF8, prev E/42C22DB8, desc: INSERT_LEAF off 333, blkref #0: rel 1663/122780008/122781313 blk 1375\nrmgr: Btree       len (rec/tot):     64/    64, tx:   63800060, lsn: E/42C22E38, prev E/42C22DF8, desc: INSERT_LEAF off 259, blkref #0: rel 1663/122780008/132582066 blk 1417\nrmgr: Heap        len (rec/tot):    197/   197, tx:   63800060, lsn: E/42C23628, prev E/42C23600, desc: HOT_UPDATE off 35 xmax 63800060 ; new off 55 xmax 0, blkref #0: rel 1663/122780008/122781222 blk 14320\nrmgr: Heap        len (rec/tot):     54/    54, tx:   63800060, lsn: E/42C23CF0, prev E/42C23CB0, desc: DELETE off 2 KEYS_UPDATED , blkref #0: rel 1663/122780008/122781230 blk 
14847\nrmgr: Heap        len (rec/tot):    253/   253, tx:   63800060, lsn: E/42C260E8, prev E/42C260A8, desc: INSERT off 11, blkref #0: rel 1663/122780008/122781230 blk 30300\nrmgr: Btree       len (rec/tot):     64/    64, tx:   63800060, lsn: E/42C26308, prev E/42C262C8, desc: INSERT_LEAF off 362, blkref #0: rel 1663/122780008/122781290 blk 2925\nrmgr: Btree       len (rec/tot):     64/    64, tx:   63800060, lsn: E/42C266F0, prev E/42C266B0, desc: INSERT_LEAF off 369, blkref #0: rel 1663/122780008/122781315 blk 1377\nrmgr: Btree       len (rec/tot):     64/    64, tx:   63800060, lsn: E/42C26AE8, prev E/42C26AC0, desc: INSERT_LEAF off 308, blkref #0: rel 1663/122780008/132498288 blk 583\nrmgr: Transaction len (rec/tot):     34/    34, tx:   63800060, lsn: E/42C271A8, prev E/42C27168, desc: COMMIT 2020-03-16 09:56:21.540818 CST\nrmgr: Heap2       len (rec/tot):     90/    90, tx:          0, lsn: E/4351D3A0, prev E/4351D360, desc: CLEAN remxid 63800060, blkref #0: rel 1663/122780008/122781225 blk 14313\nrmgr: Heap2       len (rec/tot):     96/    96, tx:          0, lsn: E/4381C898, prev E/4381C860, desc: CLEAN remxid 63800060, blkref #0: rel 1663/122780008/122781222 blk 14320\n\n\nIs it caused by an inconsistency between clog and data pages, i.e. between minRecoveryPoint 0xE41BBBFD8 and 0xE42C22D68 a dirty page was flushed to storage but the clog was not flushed before the crash?\n\n\nBRS\nRay\n\n\nThe slru error detail is the following.\nDETAIL: Could not read from file \"/data/pg_xact/003C\" at offset 221184: Success.\nI think read /data/pg_xact/003C from offset 221184 and the return value was 0.", "msg_date": "Tue, 17 Mar 2020 11:02:24 +0800 (CST)", "msg_from": "Thunder <thunder1@126.com>", "msg_from_op": false, "msg_subject": "Re:Re:Standby got fatal after the crash recovery" }, { "msg_contents": "On Tue, Mar 17, 2020 at 11:02:24AM +0800, Thunder wrote:\n> The slru error detail is the following.\n> DETAIL: Could not read from file \"/data/pg_xact/003C\" at offset 221184: Success.\n> \n> I think read /data/pg_xact/003C from offset 221184 and the return value was 0.\n\nWhat is the version of PostgreSQL you are using here?
You have not\nmentioned this information in any of the three emails you sent on this\nthread.\n--\nMichael", "msg_date": "Tue, 17 Mar 2020 12:43:51 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Re:Standby got fatal after the crash recovery" }, { "msg_contents": "Sorry.\nWe are using pg11, and cloned from tag REL_11_BETA2.\n\nAt 2020-03-17 11:43:51, \"Michael Paquier\" <michael@paquier.xyz> wrote:\n>On Tue, Mar 17, 2020 at 11:02:24AM +0800, Thunder wrote:\n>> The slru error detail is the following.\n>> DETAIL: Could not read from file \"/data/pg_xact/003C\" at offset 221184: Success.\n>> \n>> I think read /data/pg_xact/003C from offset 221184 and the return value was 0.\n>\n>What is the version of PostgreSQL you are using here? You have not\n>mentioned this information in any of the three emails you sent on this\n>thread.\n>--\n>Michael", "msg_date": "Tue, 17 Mar 2020 11:53:42 +0800 (CST)", "msg_from": "Thunder <thunder1@126.com>", "msg_from_op": false, "msg_subject": "Re:Re: Re:Standby got fatal after the crash recovery" }, { "msg_contents": "On 2020/03/17 12:53, Thunder wrote:\n> Sorry.\n> We are using pg11, and cloned from tag REL_11_BETA2.\n\nIn that case you should upgrade to the current version\nin the PostgreSQL 11 series (at the time of writing 11.7).\n\n\nRegards\n\nIan Barwick\n\n-- \nIan Barwick https://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n", "msg_date": "Wed, 18 Mar 2020 11:19:22 +0900", "msg_from": "Ian Barwick <ian.barwick@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Standby got fatal after the crash recovery" }, { "msg_contents": "On Wed, Mar 18, 2020 at 11:19:22AM +0900, Ian Barwick wrote:\n> On 2020/03/17 12:53, Thunder wrote:\n> > Sorry.\n> > We are using pg11, and cloned from tag REL_11_BETA2.\n> \n> In that case you should upgrade to the current version\n> in the PostgreSQL 11 series (at the time of writing 11.7).\n\nDefinitely. The closest things I can see in this area are 9a1bd8 and\nc34f80 which had symptoms similar to what you have mentioned here with\nbtree_xlog_delete(), but that's past history.\n--\nMichael", "msg_date": "Wed, 18 Mar 2020 11:31:53 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Standby got fatal after the crash recovery" } ]
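As a footnote to the thread above: the failing file and offset in the DETAIL line follow directly from the reported xid via the clog/slru page layout. A quick Python sketch of that arithmetic, assuming the stock constants (BLCKSZ of 8192 bytes, two status bits per transaction so four xids per byte, and 32 pages per SLRU segment file):

```python
# Sketch of the clog/slru arithmetic, assuming the stock constants:
# BLCKSZ = 8192, 2 status bits per transaction (4 xids per byte),
# and 32 pages per segment file.
BLCKSZ = 8192
XACTS_PER_BYTE = 4                    # 2 clog bits per transaction
XACTS_PER_PAGE = BLCKSZ * XACTS_PER_BYTE
PAGES_PER_SEGMENT = 32

def clog_location(xid):
    """Map a transaction id to its clog page, pg_xact segment file and file offset."""
    pageno = xid // XACTS_PER_PAGE            # page passed to SimpleLruReadPage()
    segno = pageno // PAGES_PER_SEGMENT       # which pg_xact file
    rpageno = pageno % PAGES_PER_SEGMENT      # page within that file
    return pageno, "%04X" % segno, rpageno * BLCKSZ

pageno, segment, offset = clog_location(63800060)
print(pageno, segment, offset)   # 1947 003C 221184
```

The computed values — page 1947, segment file 003C, file offset 221184 — match the pageno in the backtrace and the file/offset in the reported error detail.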
[ { "msg_contents": "AllocSet allocates memory for itself in blocks, which double in size up\nto maxBlockSize. So, the current block (the last one malloc'd) may\nrepresent half of the total memory allocated for the context itself.\n\nThe free space at the end of that block hasn't been touched at all, and\ndoesn't represent fragmentation or overhead. That means that the\n\"allocated\" memory can be 2X the memory ever touched in the worst case.\n\nAlthough that's technically correct, the purpose of\nMemoryContextMemAllocated() is to give a \"real\" usage number so we know\nwhen we're out of work_mem and need to spill (in particular, the disk-\nbased HashAgg work, but ideally other operators as well). This \"real\"\nnumber should include fragmentation, freed-and-not-reused chunks, and\nother overhead. But it should not include significant amounts of\nallocated-but-never-touched memory, which says more about economizing\ncalls to malloc than it does about the algorithm's memory usage. \n\nAttached is a patch that makes mem_allocated a method (rather than a\nfield) of MemoryContext, and allows each memory context type to track\nthe memory its own way. They all do the same thing as before\n(increment/decrement a field), but AllocSet also subtracts out the free\nspace in the current block. 
For Slab and Generation, we could do\nsomething similar, but it's not as much of a problem because there's no\ndoubling of the allocation size.\n\nAlthough I think this still matches the word \"allocation\" in spirit,\nit's not technically correct, so feel free to suggest a new name for\nMemoryContextMemAllocated().\n\nRegards,\n\tJeff Davis", "msg_date": "Mon, 16 Mar 2020 11:45:14 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Make MemoryContextMemAllocated() more precise" }, { "msg_contents": "On Mon, 2020-03-16 at 11:45 -0700, Jeff Davis wrote:\n> Attached is a patch that makes mem_allocated a method (rather than a\n> field) of MemoryContext, and allows each memory context type to track\n> the memory its own way. They all do the same thing as before\n> (increment/decrement a field), but AllocSet also subtracts out the\n> free\n> space in the current block. For Slab and Generation, we could do\n> something similar, but it's not as much of a problem because there's\n> no\n> doubling of the allocation size.\n\nCommitted.\n\nIn an off-list discussion, Andres suggested that MemoryContextStats\ncould be refactored to achieve this purpose, perhaps with flags to\navoid walking through the blocks and freelists when those are not\nneeded.\n\nWe discussed a few other names, such as \"space\", \"active memory\", and\n\"touched\". 
We didn't settle on any great name, but \"touched\" seemed to\nbe the most descriptive.\n\nThis refactoring/renaming can be done later; right now I committed this\nto unblock disk-based Hash Aggregation, which is ready.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Wed, 18 Mar 2020 15:41:42 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Make MemoryContextMemAllocated() more precise" }, { "msg_contents": "On Mon, Mar 16, 2020 at 2:45 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> Attached is a patch that makes mem_allocated a method (rather than a\n> field) of MemoryContext, and allows each memory context type to track\n> the memory its own way. They all do the same thing as before\n> (increment/decrement a field), but AllocSet also subtracts out the free\n> space in the current block. For Slab and Generation, we could do\n> something similar, but it's not as much of a problem because there's no\n> doubling of the allocation size.\n>\n> Although I think this still matches the word \"allocation\" in spirit,\n> it's not technically correct, so feel free to suggest a new name for\n> MemoryContextMemAllocated().\n\nProcedurally, I think that it is highly inappropriate to submit a\npatch two weeks after the start of the final CommitFest and then\ncommit it just over 48 hours later without a single endorsement of the\nchange from anyone.\n\nSubstantively, I think that whether or not this is an improvement depends\nconsiderably on how your OS handles overcommit.
I do not have enough\nknowledge to know whether it will be better in general, but would\nwelcome opinions from others.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 19 Mar 2020 11:44:05 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make MemoryContextMemAllocated() more precise" }, { "msg_contents": "On Thu, Mar 19, 2020 at 11:44:05AM -0400, Robert Haas wrote:\n>On Mon, Mar 16, 2020 at 2:45 PM Jeff Davis <pgsql@j-davis.com> wrote:\n>> Attached is a patch that makes mem_allocated a method (rather than a\n>> field) of MemoryContext, and allows each memory context type to track\n>> the memory its own way. They all do the same thing as before\n>> (increment/decrement a field), but AllocSet also subtracts out the free\n>> space in the current block. For Slab and Generation, we could do\n>> something similar, but it's not as much of a problem because there's no\n>> doubling of the allocation size.\n>>\n>> Although I think this still matches the word \"allocation\" in spirit,\n>> it's not technically correct, so feel free to suggest a new name for\n>> MemoryContextMemAllocated().\n>\n>Procedurally, I think that it is highly inappropriate to submit a\n>patch two weeks after the start of the final CommitFest and then\n>commit it just over 48 hours later without a single endorsement of the\n>change from anyone.\n>\n\nTrue.\n\n>Substantively, I think that whether or not this is improvement depends\n>considerably on how your OS handles overcommit. 
I do not have enough\n>knowledge to know whether it will be better in general, but would\n>welcome opinions from others.\n>\n\nNot sure overcommit is a major factor, and if it is then maybe it's the\nstrategy of doubling block size that's causing problems.\n\nAFAICS the 2x allocation is the worst case, because it only happens\nright after allocating a new block (of twice the size), when the\n\"utilization\" drops from 100% to 50%. But in practice the utilization\nwill be somewhere in between, with an average of 75%. And we're not\ndoubling the block size indefinitely - there's an upper limit, so over\ntime the utilization drops less and less. So as the contexts grow, the\ndiscrepancy disappears. And I'd argue the smaller the context, the less\nof an issue the overcommit behavior is.\n\nMy understanding is that this is really just an accounting issue, where\nallocating a block would get us over the limit, which I suppose might be\nan issue with low work_mem values.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 19 Mar 2020 19:11:31 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Make MemoryContextMemAllocated() more precise" }, { "msg_contents": "On Thu, 2020-03-19 at 19:11 +0100, Tomas Vondra wrote:\n> AFAICS the 2x allocation is the worst case, because it only happens\n> right after allocating a new block (of twice the size), when the\n> \"utilization\" drops from 100% to 50%. But in practice the utilization\n> will be somewhere in between, with an average of 75%.\n\nSort of. Hash Agg is constantly watching the memory, so it will\ntypically spill right at the point where the accounting for that memory\ncontext is off by 2X. \n\nThat's mitigated because the hash table itself (the array of\nTupleHashEntryData) ends up allocated as its own block, so does not\nhave any waste. 
The total (table mem + out of line) might be close to\nright if the hash table array itself is a large fraction of the data,\nbut I don't think that's what we want.\n\n> And we're not\n> doubling the block size indefinitely - there's an upper limit, so\n> over\n> time the utilization drops less and less. So as the contexts grow,\n> the\n> discrepancy disappears. And I'd argue the smaller the context, the\n> less\n> of an issue the overcommit behavior is.\n\nThe problem is that the default work_mem is 4MB, and the doubling\nbehavior goes to 8MB, so it's a problem with default settings.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 19 Mar 2020 12:25:16 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Make MemoryContextMemAllocated() more precise" }, { "msg_contents": "On Thu, Mar 19, 2020 at 2:11 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> My understanding is that this is really just an accounting issue, where\n> allocating a block would get us over the limit, which I suppose might be\n> an issue with low work_mem values.\n\nWell, the issue is, if I understand correctly, that this means that\nMemoryContextStats() might now report a smaller amount of memory than\nwhat we actually allocated from the operating system. 
That seems like\nit might lead someone trying to figure out where a backend is leaking\nmemory to erroneous conclusions.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 19 Mar 2020 15:26:44 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make MemoryContextMemAllocated() more precise" }, { "msg_contents": "\nOn Thu, 2020-03-19 at 11:44 -0400, Robert Haas wrote:\n> Procedurally, I think that it is highly inappropriate to submit a\n> patch two weeks after the start of the final CommitFest and then\n> commit it just over 48 hours later without a single endorsement of\n> the\n> change from anyone.\n\nReverted.\n\nSorry, I misjudged this as a \"supporting fix for a specific problem\",\nbut it seems others feel it requires discussion.\n\n> Substantively, I think that whether or not this is improvement\n> depends\n> considerably on how your OS handles overcommit. I do not have enough\n> knowledge to know whether it will be better in general, but would\n> welcome opinions from others.\n\nI think omitting the tail of the current block is an unqualified\nimprovement for the purpose of obeying work_mem, regardless of the OS.\nThe block sizes keep doubling up to 8MB, and it doesn't make a lot of\nsense to count that last large mostly-empty block against work_mem.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 19 Mar 2020 12:27:50 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Make MemoryContextMemAllocated() more precise" }, { "msg_contents": "On Thu, Mar 19, 2020 at 3:27 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> I think omitting the tail of the current block is an unqualified\n> improvement for the purpose of obeying work_mem, regardless of the OS.\n> The block sizes keep doubling up to 8MB, and it doesn't make a lot of\n> sense to count that last large mostly-empty block against 
work_mem.\n\nWell, again, my point is that whether or not it counts depends on your\nsystem's overcommit behavior. Depending on how you have that\nconfigured, or what your OS likes to do, it may in reality count or\nnot count. Or so I believe. Am I wrong?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 19 Mar 2020 15:33:26 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make MemoryContextMemAllocated() more precise" }, { "msg_contents": "On Thu, Mar 19, 2020 at 12:25:16PM -0700, Jeff Davis wrote:\n>On Thu, 2020-03-19 at 19:11 +0100, Tomas Vondra wrote:\n>> AFAICS the 2x allocation is the worst case, because it only happens\n>> right after allocating a new block (of twice the size), when the\n>> \"utilization\" drops from 100% to 50%. But in practice the utilization\n>> will be somewhere in between, with an average of 75%.\n>\n>Sort of. Hash Agg is constantly watching the memory, so it will\n>typically spill right at the point where the accounting for that memory\n>context is off by 2X.\n>\n>That's mitigated because the hash table itself (the array of\n>TupleHashEntryData) ends up allocated as its own block, so does not\n>have any waste. The total (table mem + out of line) might be close to\n>right if the hash table array itself is a large fraction of the data,\n>but I don't think that's what we want.\n>\n>> And we're not\n>> doubling the block size indefinitely - there's an upper limit, so\n>> over\n>> time the utilization drops less and less. So as the contexts grow,\n>> the\n>> discrepancy disappears. And I'd argue the smaller the context, the\n>> less\n>> of an issue the overcommit behavior is.\n>\n>The problem is that the default work_mem is 4MB, and the doubling\n>behavior goes to 8MB, so it's a problem with default settings.\n>\n\nYes, it's an issue for the accuracy of our accounting.
What Robert was\ntalking about is overcommit behavior at the OS level, which I'm arguing\nis unlikely to be an issue, because for low work_mem values the absolute\ndifference is small, and on large work_mem values it's limited by the\nblock size limit.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 19 Mar 2020 20:37:34 +0100", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Make MemoryContextMemAllocated() more precise" }, { "msg_contents": "On Thu, 2020-03-19 at 15:26 -0400, Robert Haas wrote:\n> Well, the issue is, if I understand correctly, that this means that\n> MemoryContextStats() might now report a smaller amount of memory than\n> what we actually allocated from the operating system. That seems like\n> it might lead someone trying to figure out where a backend is leaking\n> memory to erroneous conclusions.\n\nMemoryContextStats() was unaffected by my now-reverted change.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 19 Mar 2020 12:44:44 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Make MemoryContextMemAllocated() more precise" }, { "msg_contents": "On Mon, 2020-03-16 at 11:45 -0700, Jeff Davis wrote:\n> Although that's technically correct, the purpose of\n> MemoryContextMemAllocated() is to give a \"real\" usage number so we\n> know\n> when we're out of work_mem and need to spill (in particular, the\n> disk-\n> based HashAgg work, but ideally other operators as well). This \"real\"\n> number should include fragmentation, freed-and-not-reused chunks, and\n> other overhead. But it should not include significant amounts of\n> allocated-but-never-touched memory, which says more about economizing\n> calls to malloc than it does about the algorithm's memory usage. 
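For concreteness, the "2X in the worst case" figure discussed in this thread is easy to reproduce numerically. A small Python sketch, assuming the default AllocSet sizing (an 8kB initial block doubling up to an 8MB maximum block):

```python
# Sketch of AllocSet's block doubling, assuming the default sizing:
# an 8kB initial block, each new block twice the previous, capped at 8MB.
init_block, max_block = 8 * 1024, 8 * 1024 * 1024

blocks = []
size = init_block
while size <= max_block:
    blocks.append(size)
    size *= 2

allocated = sum(blocks)
# Worst case: the newest (largest) block was just malloc'd and is still empty.
touched = allocated - blocks[-1]
print(allocated, touched, allocated / touched)   # ratio is ~2.0
```

Right after the 8MB block is allocated, "allocated" (about 16MB here) is roughly twice the memory ever touched; as further blocks are capped at 8MB, the ratio then shrinks back toward 1.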
\n\nTo expand on this point:\n\nwork_mem is to keep executor algorithms somewhat constrained in the\nmemory that they use. With that in mind, it should reflect things that\nthe algorithm has some control over, and can be measured cheaply.\n\nTherefore, we shouldn't include huge nearly-empty blocks of memory that\nthe system decided to allocate in response to a request for a small\nchunk (the algorithm has little control). Nor should we try to walk a\nlist of blocks or free chunks (expensive).\n\nWe should include used memory, reasonable overhead (chunk headers,\nalignment, etc.), and probably free chunks (which represent\nfragmentation).\n\nFor AllocSet, the notion of \"all touched memory\", which is everything\nexcept the current block's tail, seems to fit the requirements well.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 19 Mar 2020 12:56:59 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Make MemoryContextMemAllocated() more precise" }, { "msg_contents": "On Thu, Mar 19, 2020 at 3:44 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> On Thu, 2020-03-19 at 15:26 -0400, Robert Haas wrote:\n> > Well, the issue is, if I understand correctly, that this means that\n> > MemoryContextStats() might now report a smaller amount of memory than\n> > what we actually allocated from the operating system. That seems like\n> > it might lead someone trying to figure out where a backend is leaking\n> > memory to erroneous conclusions.\n>\n> MemoryContextStats() was unaffected by my now-reverted change.\n\nOh. Well, that addresses my concern, then. 
If this only affects the\naccounting for memory-bounded hash aggregation and nothing else is\ngoing to be touched, including MemoryContextStats(), then it's not an\nissue for me.\n\nOther people may have different concerns, but that was the only thing\nthat was bothering me.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 19 Mar 2020 16:04:46 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make MemoryContextMemAllocated() more precise" }, { "msg_contents": "On Thu, 2020-03-19 at 15:33 -0400, Robert Haas wrote:\n> On Thu, Mar 19, 2020 at 3:27 PM Jeff Davis <pgsql@j-davis.com> wrote:\n> > I think omitting the tail of the current block is an unqualified\n> > improvement for the purpose of obeying work_mem, regardless of the\n> > OS.\n> > The block sizes keep doubling up to 8MB, and it doesn't make a lot\n> > of\n> > sense to count that last large mostly-empty block against work_mem.\n> \n> Well, again, my point is that whether or not it counts depends on\n> your\n> system's overcommit behavior. Depending on how you have the\n> configured, or what your OS likes to do, it may in reality count or\n> not count. Or so I believe. Am I wrong?\n\nI don't believe it should be counted for the purposes of work_mem.\n\nLet's say that the OS eagerly allocates it, then what is the algorithm\nsupposed to do in response? It can either:\n\n1. just accept that all of the space is used, even though it's\npotentially as low as 50% used, and spill earlier than may be\nnecessary; or\n\n2. find a way to measure the free space, and somehow predict whether\nthat space will be reused the next time a group is added to the hash\ntable\n\nIt just doesn't seem reasonable to me for the algorithm to change its\nbehavior based on these large block allocations. 
It may be valuable\ninformation for other purposes (like tuning your OS, or tracking down\nmemory issues), though.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 19 Mar 2020 13:09:16 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Make MemoryContextMemAllocated() more precise" }, { "msg_contents": "On Thu, 2020-03-19 at 16:04 -0400, Robert Haas wrote:\n> Other people may have different concerns, but that was the only thing\n> that was bothering me.\n\nOK, thank you for raising it.\n\nPerhaps we can re-fix the issue for HashAgg if necessary, or I can\ntweak some accounting things within HashAgg itself, or both.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Thu, 19 Mar 2020 13:16:26 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Make MemoryContextMemAllocated() more precise" }, { "msg_contents": "On Wed, 2020-03-18 at 15:41 -0700, Jeff Davis wrote:\n> In an off-list discussion, Andres suggested that MemoryContextStats\n> could be refactored to achieve this purpose, perhaps with flags to\n> avoid walking through the blocks and freelists when those are not\n> needed.\n\nAttached refactoring patch. There's enough in here that warrants\ndiscussion that I don't think this makes sense for v13 and I'm adding\nit to the July commitfest.\n\nI still think we should do something for v13, such as the originally-\nproposed patch[1]. It's not critical, but it simply reports a better\nnumber for memory consumption. 
Currently, the memory usage appears to\njump, often right past work mem (by a reasonable but noticable amount),\nwhich could be confusing.\n\nRegarding the attached patch (target v14):\n\n * there's a new MemoryContextCount() that simply calculates the\n statistics without printing anything, and returns a struct\n - it supports flags to indicate which stats should be\n calculated, so that some callers can avoid walking through\n blocks/freelists\n * it adds a new statistic for \"new space\" (i.e. untouched)\n * it eliminates specialization of the memory context printing\n - the only specialization was for generation.c to output the\n number of chunks, which can be done easily enough for the\n other types, too\n\nRegards,\n\tJeff Davis\n\n\n[1] \nhttps://postgr.es/m/ec63d70b668818255486a83ffadc3aec492c1f57.camel%40j-davis.com", "msg_date": "Fri, 27 Mar 2020 17:21:10 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Make MemoryContextMemAllocated() more precise" }, { "msg_contents": "Hi,\n\nOn 2020-03-27 17:21:10 -0700, Jeff Davis wrote:\n> Attached refactoring patch. There's enough in here that warrants\n> discussion that I don't think this makes sense for v13 and I'm adding\n> it to the July commitfest.\n\nIDK, adding a commit to v13 that we know we should do architecturally\ndifferently in v14, when the difference in complexity between the two\npatches isn't actually *that* big...\n\nI'd like to see others jump in here...\n\n\n> I still think we should do something for v13, such as the originally-\n> proposed patch[1]. It's not critical, but it simply reports a better\n> number for memory consumption. Currently, the memory usage appears to\n> jump, often right past work mem (by a reasonable but noticable amount),\n> which could be confusing.\n\nIs that really a significant issue for most work mem sizes? 
Shouldn't\nthe way we increase sizes lead to the max difference between the\nmeasurements to be somewhat limited?\n\n\n> * there's a new MemoryContextCount() that simply calculates the\n> statistics without printing anything, and returns a struct\n> - it supports flags to indicate which stats should be\n> calculated, so that some callers can avoid walking through\n> blocks/freelists\n> * it adds a new statistic for \"new space\" (i.e. untouched)\n> * it eliminates specialization of the memory context printing\n> - the only specialization was for generation.c to output the\n> number of chunks, which can be done easily enough for the\n> other types, too\n\nThat sounds like a good direction.\n\n\n\n> +\tif (flags & MCXT_STAT_NBLOCKS)\n> +\t\tcounters.nblocks = nblocks;\n> +\tif (flags & MCXT_STAT_NCHUNKS)\n> +\t\tcounters.nchunks = set->nChunks;\n> +\tif (flags & MCXT_STAT_FREECHUNKS)\n> +\t\tcounters.freechunks = freechunks;\n> +\tif (flags & MCXT_STAT_TOTALSPACE)\n> +\t\tcounters.totalspace = set->memAllocated;\n> +\tif (flags & MCXT_STAT_FREESPACE)\n> +\t\tcounters.freespace = freespace;\n> +\tif (flags & MCXT_STAT_NEWSPACE)\n> +\t\tcounters.newspace = set->blocks->endptr - set->blocks->freeptr;\n\nI'd spec it so that context implementations are allowed to\nunconditionally fill fields, even when the flag isn't specified. 
The\nbranches quoted don't buy us anything...\n\n\n\n> diff --git a/src/include/nodes/memnodes.h b/src/include/nodes/memnodes.h\n> index c9f2bbcb367..cc545852968 100644\n> --- a/src/include/nodes/memnodes.h\n> +++ b/src/include/nodes/memnodes.h\n> @@ -29,11 +29,21 @@\n> typedef struct MemoryContextCounters\n> {\n> \tSize\t\tnblocks;\t\t/* Total number of malloc blocks */\n> +\tSize\t\tnchunks;\t\t/* Total number of chunks (used+free) */\n> \tSize\t\tfreechunks;\t\t/* Total number of free chunks */\n> \tSize\t\ttotalspace;\t\t/* Total bytes requested from malloc */\n> \tSize\t\tfreespace;\t\t/* The unused portion of totalspace */\n> +\tSize\t\tnewspace;\t\t/* Allocated but never held any chunks */\n\nI'd add some reasoning as to why this is useful.\n\n\n> } MemoryContextCounters;\n> \n> +#define MCXT_STAT_NBLOCKS\t\t(1 << 0)\n> +#define MCXT_STAT_NCHUNKS\t\t(1 << 1)\n> +#define MCXT_STAT_FREECHUNKS\t(1 << 2)\n> +#define MCXT_STAT_TOTALSPACE\t(1 << 3)\n> +#define MCXT_STAT_FREESPACE\t\t(1 << 4)\n> +#define MCXT_STAT_NEWSPACE\t\t(1 << 5)\n\ns/MCXT_STAT/MCXT_STAT_NEED/?\n\n\n> +#define MCXT_STAT_ALL\t\t\t((1 << 6) - 1)\n\nHm, why not go for ~0 or such?\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 5 Apr 2020 16:48:25 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Make MemoryContextMemAllocated() more precise" }, { "msg_contents": "On Sat, 28 Mar 2020 at 13:21, Jeff Davis <pgsql@j-davis.com> wrote:\n> Attached refactoring patch. There's enough in here that warrants\n> discussion that I don't think this makes sense for v13 and I'm adding\n> it to the July commitfest.\n\nI had a read over this too. I noted down the following during my pass of it.\n\n1. 
The comment mentions \"passthru\", but you've removed that parameter.\n\n * For now, the passthru pointer just points to \"int level\"; later we might\n * make that more complicated.\n */\n static void\n-MemoryContextStatsPrint(MemoryContext context, void *passthru,\n+MemoryContextStatsPrint(MemoryContext context, int level,\n const char *stats_string)\n\n2. I don't think MemoryContextCount is the best name for this\nfunction. When I saw:\n\ncounters = MemoryContextCount(aggstate->hash_metacxt, flags, true);\n\nI assumed it was returning some integer number, that is until I looked\nat the \"counters\" datatype.\n\nMaybe it would be better to name the function\nMemoryContextGetTelemetry and the struct MemoryContextTelemetry rather\nthan MemoryContextCounters? Or maybe MemoryContextTally and call the\nfunction either MemoryContextGetTelemetry() or\nMemoryContextGetTally(). Or perhaps MemoryContextGetAccounting() and\nthe struct MemoryContextAccounting.\n\n3. I feel like it would be nicer if you didn't change the \"count\"\nmethods to return a MemoryContextCounters. It means you may need to\nzero a struct for each level, assign the values, then add them to the\ntotal. If you were just to zero the struct in MemoryContextCount()\nthen pass it into the count function, then you could just have it do\nall the += work. It would reduce the code in MemoryContextCount() too.\n\n4. Do you think it would be better to have two separate functions for\nMemoryContextCount(), a recursive version and a non-recursive version.\nI feel that the function should be so simple that it does not make a\ngreat deal of sense to load it up to handle both cases. Looking\naround mcxt.c, I see MemoryContextResetOnly() and\nMemoryContextResetChildren(), while that's not a perfect example, it\ndoes seem like a better lead to follow.\n\n5. For performance testing, I tried using the following table with 1MB\nwork_mem then again with 1GB work_mem. 
I wondered if making the\naccounting more complex would cause a slowdown in nodeAgg.c, as I see\nwe're calling this function each time we get a tuple that belongs in a\nnew group. The 1MB test is likely not such a great way to measure this\nsince we do spill to disk in that case and the changing in accounting\nmeans we do that at a slightly different time, but the 1GB test does\nnot spill and it's a bit slower.\n\ncreate table t (a int);\ninsert into t select x from generate_Series(1,10000000) x;\nanalyze t;\n\n-- Unpatched\n\nset work_mem = '1MB';\nexplain analyze select a from t group by a; -- Ran 3 times.\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=481750.10..659875.95 rows=10000048 width=4)\n(actual time=3350.190..8193.400 rows=10000000 loops=1)\n Group Key: a\n Planned Partitions: 32\n Peak Memory Usage: 1177 kB\n Disk Usage: 234920 kB\n HashAgg Batches: 1188\n -> Seq Scan on t (cost=0.00..144248.48 rows=10000048 width=4)\n(actual time=0.013..1013.755 rows=10000000 loops=1)\n Planning Time: 0.131 ms\n Execution Time: 8586.420 ms\n Execution Time: 8446.961 ms\n Execution Time: 8449.492 ms\n(9 rows)\n\n-- Patched\n\nset work_mem = '1MB';\nexplain analyze select a from t group by a;\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=481748.00..659873.00 rows=10000000 width=4)\n(actual time=3470.107..8598.836 rows=10000000 loops=1)\n Group Key: a\n Planned Partitions: 32\n Peak Memory Usage: 1033 kB\n Disk Usage: 234816 kB\n HashAgg Batches: 1056\n -> Seq Scan on t (cost=0.00..144248.00 rows=10000000 width=4)\n(actual time=0.017..1091.820 rows=10000000 loops=1)\n Planning Time: 0.285 ms\n Execution Time: 8996.824 ms\n Execution Time: 8781.624 ms\n Execution Time: 8900.324 ms\n(9 rows)\n\n-- Unpatched\n\nset work_mem = '1GB';\nexplain analyze 
select a from t group by a;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=169248.00..269248.00 rows=10000000 width=4)\n(actual time=4537.779..7033.318 rows=10000000 loops=1)\n Group Key: a\n Peak Memory Usage: 868369 kB\n -> Seq Scan on t (cost=0.00..144248.00 rows=10000000 width=4)\n(actual time=0.018..820.136 rows=10000000 loops=1)\n Planning Time: 0.054 ms\n Execution Time: 7561.063 ms\n Execution Time: 7573.985 ms\n Execution Time: 7572.058 ms\n(6 rows)\n\n-- Patched\n\nset work_mem = '1GB';\nexplain analyze select a from t group by a;\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=169248.00..269248.00 rows=10000000 width=4)\n(actual time=4840.045..7359.970 rows=10000000 loops=1)\n Group Key: a\n Peak Memory Usage: 861975 kB\n -> Seq Scan on t (cost=0.00..144248.00 rows=10000000 width=4)\n(actual time=0.018..789.975 rows=10000000 loops=1)\n Planning Time: 0.055 ms\n Execution Time: 7904.069 ms\n Execution Time: 7913.692 ms\n Execution Time: 7927.061 ms\n(6 rows)\n\nPerhaps the slowdown is unrelated. If it, then maybe the reduction in\nbranching mentioned by Andres might help a bit plus maybe a bit from\nwhat I mentioned above about passing in the counter struct instead of\nreturning it at each level.\n\nDavid\n\n\n", "msg_date": "Mon, 6 Apr 2020 23:39:03 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make MemoryContextMemAllocated() more precise" }, { "msg_contents": "On Mon, 2020-04-06 at 23:39 +1200, David Rowley wrote:\n> 1. The comment mentions \"passthru\", but you've removed that\n> parameter.\n\nFixed, thank you.\n\n> 2. I don't think MemoryContextCount is the best name for this\n> function. When I saw:\n\nI've gone back and forth on naming a bit. 
The right name, in my\nopinion, is MemoryContextStats(), but that's taken by something that\nshould be called MemoryContextReport(). But I didn't want to change\nthat as it would probably annoy a lot of people who are used to calling\nMemoryContextStats() from gdb.\n\nI changed the new function to be called MemoryContextGetCounters(),\nwhich is more directly what it's doing. \"Telemetry\" makes me think more\nof a stream of information rather than a particular point in time.\n\n> 3. I feel like it would be nicer if you didn't change the \"count\"\n> methods to return a MemoryContextCounters. It means you may need to\n> zero a struct for each level, assign the values, then add them to the\n> total. If you were just to zero the struct in MemoryContextCount()\n> then pass it into the count function, then you could just have it do\n> all the += work. It would reduce the code in MemoryContextCount()\n> too.\n\nI changed it to use a pointer out parameter, but I don't think an\nin/out parameter is quite right there. MemoryContextStats() ends up\nusing both the per-context counters as well as the totals, so it's not\nhelpful to return just the totals.\n\n> 4. Do you think it would be better to have two separate functions for\n> MemoryContextCount(), a recursive version and a non-recursive\n> version.\n\nI could, but right now the only caller passes recurse=true, so I'm not\nreally eliminating any code in that path by specializing the functions.\nAre you thinking about performance or you just think it would be better\nto have two entry points?\n\n> 5. For performance testing, I tried using the following table with\n> 1MB\n> work_mem then again with 1GB work_mem. I wondered if making the\n> accounting more complex would cause a slowdown in nodeAgg.c, as I see\n> we're calling this function each time we get a tuple that belongs in\n> a\n> new group. 
The 1MB test is likely not such a great way to measure\n> this\n> since we do spill to disk in that case and the changing in accounting\n> means we do that at a slightly different time, but the 1GB test does\n> not spill and it's a bit slower.\n\nI think this was because I was previously returning a struct. By just\npassing a pointer as an out param, it seems to have mitigated it, but\nnot completely eliminated the cost (< 2% in my tests). I was using an\nOFFSET 100000000 instead of EXPLAIN ANALYZE in my test measurements\nbecause it was less noisy, and I focused on the 1GB test for the reason\nyou mention.\n\nI also addressed Andres's comments:\n\n* changed the name of the flags from MCXT_STAT to MCXT_COUNT\n* changed ((1<<6)-1) to ~0\n* removed unnecessary branches from the GetCounters method\n* expanded some comments\n\nRegards,\n\tJeff Davis", "msg_date": "Mon, 06 Apr 2020 19:26:15 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Make MemoryContextMemAllocated() more precise" }, { "msg_contents": "On Sun, 2020-04-05 at 16:48 -0700, Andres Freund wrote:\n> > I still think we should do something for v13, such as the\n> > originally-\n> > proposed patch[1]. It's not critical, but it simply reports a\n> > better\n> > number for memory consumption. Currently, the memory usage appears\n> > to\n> > jump, often right past work mem (by a reasonable but noticable\n> > amount),\n> > which could be confusing.\n> \n> Is that really a significant issue for most work mem sizes? Shouldn't\n> the way we increase sizes lead to the max difference between the\n> measurements to be somewhat limited?\n\nFor work_mem less than 16MB, it's essentially spilling when the table\ncontext is about 75% of what it could be (as bad as 50% and as good as\n100% depending on a number of factors). 
That's not terrible, but it is\nsignificant.\n\nIt also means that the reported memory peak jumps up rather than going\nup gradually, so it ends up surpassing work_mem (e.g. 4MB of work_mem\nmight show a memory peak of 5MB). So it's a weird combination of under-\nutilizing and over-reporting.\n\n> I'd spec it so that context implementations are allowed to\n> unconditionally fill fields, even when the flag isn't specified. The\n> branches quoted don't buy us anyting...\n\nDone.\n\n> I'd add some reasoning as to why this is useful.\n\nDone.\n\n> s/MCXT_STAT/MCXT_STAT_NEED/?\n\nI changed to MCXT_COUNT_. MCXT_STAT_NEED seemed slightly verbose for\nme.\n\n> > +#define MCXT_STAT_ALL\t\t\t((1 << 6) - 1)\n> \n> Hm, why not go for ~0 or such?\n\nDone.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Mon, 06 Apr 2020 19:35:35 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Make MemoryContextMemAllocated() more precise" }, { "msg_contents": "On Tue, 7 Apr 2020 at 14:26, Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Mon, 2020-04-06 at 23:39 +1200, David Rowley wrote:\n> > 4. 
Do you think it would be better to have two separate functions for\n> > MemoryContextCount(), a recursive version and a non-recursive\n> > version.\n>\n> I could, but right now the only caller passes recurse=true, so I'm not\n> really eliminating any code in that path by specializing the functions.\n> Are you thinking about performance or you just think it would be better\n> to have two entry points?\n\nI was thinking in terms of both performance by eliminating a branch,\nbut mostly it was down to ease of code reading.\n\nI thought it was easier to read:\nMemoryContextGetCountersRecurse(&counters); then\nMemoryContextGetCounters(&counters, true); since I might need to go\nsee what \"true\" is for.\n\nThe non-recursive version, if we decide we need one, would likely just\nbe a one-line body function calling the implementation's getcounter\nmethod.\n\nDavid\n\n\n", "msg_date": "Wed, 8 Apr 2020 00:14:16 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Make MemoryContextMemAllocated() more precise" }, { "msg_contents": "On Mon, 2020-03-16 at 11:45 -0700, Jeff Davis wrote:\n> AllocSet allocates memory for itself in blocks, which double in size\n> up\n> to maxBlockSize. So, the current block (the last one malloc'd) may\n> represent half of the total memory allocated for the context itself.\n\nNarrower approach that doesn't touch memory context internals:\n\nIf the blocks double up in size to maxBlockSize, why not just create\nthe memory context with a smaller maxBlockSize? 
I had originally\ndismissed this as a hack that could slow down some workloads when\nwork_mem is large.\n\nBut we can simply make it proportional to work_mem, which makes a lot\nof sense for an operator like HashAgg that controls its memory usage.\nIt can allocate in blocks large enough that we don't call malloc() too\noften when work_mem is large; but small enough that we don't overrun\nwork_mem when work_mem is small.\n\nI have attached a patch to do this only for HashAgg, using a new entry\npoint in execUtils.c called CreateWorkExprContext(). It sets\nmaxBlockSize to 1/16th of work_mem (rounded down to a power of two),\nwith a minimum of initBlockSize.\n\nThis could be a good general solution for other operators as well, but\nthat requires a bit more investigation, so I'll leave that for v14.\n\nThe attached patch is narrow and solves the problem for HashAgg nicely\nwithout interfering with anything else, so I plan to commit it soon for\nv13.\n\nRegards,\n\tJeff Davis", "msg_date": "Tue, 07 Apr 2020 17:21:00 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Make MemoryContextMemAllocated() more precise" }, { "msg_contents": "> On 8 Apr 2020, at 02:21, Jeff Davis <pgsql@j-davis.com> wrote:\n\n> The attached patch is narrow and solves the problem for HashAgg nicely\n> without interfering with anything else, so I plan to commit it soon for\n> v13.\n\nIf I read this thread correctly, there is nothing outstanding here for 14 from\nthis patch? I've marked the entry committed in the July commitfest, feel free to\nchange that in case I misunderstood.\n\ncheers ./daniel\n", "msg_date": "Wed, 1 Jul 2020 14:34:31 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Make MemoryContextMemAllocated() more precise" } ]
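The sizing rule from the final message of the thread above (maxBlockSize capped at roughly 1/16th of work_mem, rounded down to a power of two, with initBlockSize as a floor) can be sketched as a small standalone function. This is an illustrative sketch only; the names and the bytes-based work_mem parameter are invented here and are not the actual CreateWorkExprContext() code:

```c
#include <stddef.h>

/* Largest power of two less than or equal to x (assumes x >= 1). */
static size_t
prev_power2(size_t x)
{
	size_t		p = 1;

	while (p <= x / 2)
		p *= 2;
	return p;
}

/*
 * Sketch of the sizing rule described above: cap the allocator's
 * maxBlockSize at roughly 1/16th of work_mem (taken here in bytes),
 * rounded down to a power of two, but never below initBlockSize.
 */
static size_t
work_mem_block_size(size_t work_mem_bytes, size_t init_block_size)
{
	size_t		max_block;

	if (work_mem_bytes / 16 < 1)
		return init_block_size;
	max_block = prev_power2(work_mem_bytes / 16);
	return (max_block < init_block_size) ? init_block_size : max_block;
}
```

With work_mem at 4MB this caps blocks at 256kB, so the final, partially-filled block can overshoot the accounted memory by only a small fraction rather than by up to half, while large work_mem settings still get blocks big enough to keep malloc() traffic low.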
[ { "msg_contents": "I have another refactoring patch for nbtinsert.c that is a little\nbigger than the commits I've pushed recently. I thought I should run\nit by -hackers before proceeding with commit, even though it seems\nlike a very clear improvement to me.\n\nAttached patch creates a _bt_search() wrapper that is local to\nnbtinsert.c -- _bt_search_insert(). This contains all the logic needed\nto handle the \"fastpath\" rightmost leaf page caching optimization\nadded by commit 2b272734, avoiding any direct knowledge of the\noptimization within the high level _bt_doinsert() function. This is\nmore or less how things were before 2b272734. This is certainly a\nreadability win.\n\nIt's also useful to have our leaf page buffer acquired within a\ndedicated function that knows about the BTInsertState struct. That\nmakes the \"ownership\" of the buffer less ambiguous, and provides a\nsingle reference point for other code that sets up the fastpath\noptimization that will now actually be used in _bt_search_insert().\n\n-- \nPeter Geoghegan", "msg_date": "Mon, 16 Mar 2020 14:04:22 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "nbtree: Refactor \"fastpath\" and _bt_search() code" } ]
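The rightmost-leaf caching that the _bt_search_insert() wrapper above encapsulates can be illustrated with a deliberately simplified toy model. Everything below (ToyPage, ToyIndex, the linear "descent") is invented for illustration and is not the real nbtinsert.c code; it only shows the shape of the optimization: remember the rightmost page, try it first, and fall back to a full search on a miss:

```c
#include <limits.h>

/* Toy model only: invented types, not nbtree's data structures. */
typedef struct ToyPage
{
	int			lowkey;		/* smallest key that belongs on this page */
	int			highkey;	/* keys here are < highkey; INT_MAX = rightmost */
	int			nitems;
	int			capacity;
} ToyPage;

typedef struct ToyIndex
{
	ToyPage		pages[8];
	int			npages;
	int			cached_rightmost;	/* cached rightmost page, or -1 */
} ToyIndex;

/* Full "descent": scan pages left to right (stand-in for _bt_search()). */
static int
toy_search(ToyIndex *ix, int key)
{
	int			i;

	for (i = 0; i < ix->npages; i++)
		if (key < ix->pages[i].highkey)
			return i;
	return ix->npages - 1;
}

/*
 * Stand-in for the _bt_search_insert() wrapper described above: try the
 * cached rightmost page first, and fall back to a full descent when the
 * cache misses.  Returns the page index and refreshes the cache.
 */
static int
toy_search_insert(ToyIndex *ix, int key, int *used_fastpath)
{
	int			page;

	if (ix->cached_rightmost >= 0)
	{
		ToyPage    *p = &ix->pages[ix->cached_rightmost];

		/* Reuse only a rightmost page with room whose range fits the key. */
		if (p->highkey == INT_MAX && p->nitems < p->capacity &&
			key >= p->lowkey)
		{
			*used_fastpath = 1;
			return ix->cached_rightmost;
		}
	}
	*used_fastpath = 0;
	page = toy_search(ix, key);
	if (ix->pages[page].highkey == INT_MAX)
		ix->cached_rightmost = page;	/* remember for the next insert */
	return page;
}
```

In the real code the cached block must additionally be pinned, conditionally locked, and rechecked before it can be reused, which is exactly the bookkeeping that makes a dedicated wrapper owning the buffer attractive.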
[ { "msg_contents": "How to install https://github.com/sraoss/pgsql-ivm on Postgres running on\nUbuntu and AWS Postgres\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n", "msg_date": "Mon, 16 Mar 2020 14:51:13 -0700 (MST)", "msg_from": "ankurthakkar <ankurthakkar@hotmail.com>", "msg_from_op": true, "msg_subject": "How to install https://github.com/sraoss/pgsql-ivm on Postgres" }, { "msg_contents": "On 3/16/20 5:51 PM, ankurthakkar wrote:\n> How to install https://github.com/sraoss/pgsql-ivm on Postgres running on\n> Ubuntu and AWS Postgres\n\nThat project appears to include its own modified version of\nPostgreSQL, so to use it you would simply build it. It isn't something\nthat can just be installed into an unmodified PostgreSQL version.\n\nRegards,\n-Chap\n", "msg_date": "Tue, 17 Mar 2020 10:17:37 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: How to install https://github.com/sraoss/pgsql-ivm on Postgres" } ]
[ { "msg_contents": "Hackers,\n\nThis is pretty low priority stuff, but I noticed there is no test coverage for this in src/test/regress. There is underwhelming coverage elsewhere. I leave it to others to decide if this is worth committing.\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 16 Mar 2020 15:59:06 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Adding test coverage for ALTER SEQUENCE .. SET SCHEMA" } ]
[ { "msg_contents": "Hackers,\n\nWhile working on object access hooks, I noticed several locations where I would expect the hook to be invoked, but no actual invocation. I think this just barely qualifies as a bug. It's debatable because whether it is a bug depends on the user's expectations and whether not invoking the hook in these cases is defensible. Does anybody have any recollection of an intentional choice not to invoke in these locations?\n\nPatch attached.\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 16 Mar 2020 16:03:51 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Adding missing object access hook invocations" }, { "msg_contents": "On 2020-Mar-16, Mark Dilger wrote:\n\n> Hackers,\n> \n> While working on object access hooks, I noticed several locations where I would expect the hook to be invoked, but no actual invocation. I think this just barely qualifies as a bug. It's debatable because whether it is a bug depends on the user's expectations and whether not invoking the hook in these cases is defensible. Does anybody have any recollection of an intentional choice not to invoke in these locations?\n\nHmm, possibly the create-time calls are missing.\n\nI'm surprised about the InvokeObjectDropHook calls though. Doesn't\ndeleteOneObject already call that? If we have more calls elsewhere,\nmaybe they are redundant. 
I think we should only have those for\n\"shared\" objects.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 16 Mar 2020 21:14:36 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Adding missing object access hook invocations" }, { "msg_contents": "\n\n> On Mar 16, 2020, at 5:14 PM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> \n> On 2020-Mar-16, Mark Dilger wrote:\n> \n>> Hackers,\n>> \n>> While working on object access hooks, I noticed several locations where I would expect the hook to be invoked, but no actual invocation. I think this just barely qualifies as a bug. It's debatable because whether it is a bug depends on the user's expectations and whether not invoking the hook in these cases is defensible. Does anybody have any recollection of an intentional choice not to invoke in these locations?\n> \n> Hmm, possibly the create-time calls are missing.\n\nIt looks to me that both the create and alter calls are missing.\n\n> \n> I'm surprised about the InvokeObjectDropHook calls though. Doesn't\n> deleteOneObject already call that? If we have more calls elsewhere,\n> maybe they are redundant. I think we should only have those for\n> \"shared\" objects.\n\nYeah, you are right about the drop hook being invoked elsewhere for dropping ACCESS METHOD and STATISTICS. 
Sorry for the noise.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 16 Mar 2020 17:58:51 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Adding missing object access hook invocations" }, { "msg_contents": "Hi,\n\nOn 2020-03-16 16:03:51 -0700, Mark Dilger wrote:\n> While working on object access hooks, I noticed several locations\n> where I would expect the hook to be invoked, but no actual invocation.\n> I think this just barely qualifies as a bug. It's debatable because\n> whether it is a bug depends on the user's expectations and whether not\n> invoking the hook in these cases is defensible. Does anybody have any\n> recollection of an intentional choice not to invoke in these\n> locations?\n\nI am strongly against treating this as a bug, which'd likely imply\nbackpatching. New hook invocations are a noticeable behavioural change,\nand very plausibly will break currently working extensions. That's fine\nfor a major version upgrade, but not for a minor one, unless there are\nvery good reasons.\n\nAndres\n\n\n\n", "msg_date": "Tue, 17 Mar 2020 11:49:41 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Adding missing object access hook invocations" }, { "msg_contents": "\n\n> On Mar 17, 2020, at 11:49 AM, Andres Freund <andres@anarazel.de> wrote:\n> \n> On 2020-03-16 16:03:51 -0700, Mark Dilger wrote:\n>> While working on object access hooks, I noticed several locations\n>> where I would expect the hook to be invoked, but no actual invocation.\n>> I think this just barely qualifies as a bug. It's debatable because\n>> whether it is a bug depends on the user's expectations and whether not\n>> invoking the hook in these cases is defensible. 
Does anybody have any\n>> recollection of an intentional choice not to invoke in these\n>> locations?\n> \n> I am strongly against treating this as a bug, which'd likely imply\n> backpatching. New hook invocations are a noticeable behavioural change,\n> and very plausibly will break currently working extensions. That's fine\n> for a major version upgrade, but not for a minor one, unless there are\n> very good reasons.\n\nI agree that this does not need to be back-patched. I was debating whether it constitutes a bug for the purpose of putting the fix into v13 vs. punting the patch forward to the v14 cycle. I don't have a strong opinion on that.\n\nThoughts?\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Tue, 17 Mar 2020 12:39:35 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Adding missing object access hook invocations" }, { "msg_contents": "On Tue, Mar 17, 2020 at 12:39:35PM -0700, Mark Dilger wrote:\n> I agree that this does not need to be back-patched. I was debating\n> whether it constitutes a bug for the purpose of putting the fix into\n> v13 vs. punting the patch forward to the v14 cycle. 
I don't have a\n>> strong opinion on that.\n> \n> I don't see any strong argument against fixing this stuff in v13,\n> FWIW.\n\nHere is the latest patch. I'll go add this thread to the commitfest app now....\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 18 Mar 2020 07:50:11 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Adding missing object access hook invocations" }, { "msg_contents": "On 2020-Mar-18, Mark Dilger wrote:\n\n> Here is the latest patch.\n\nSo you insist in keeping the Drop hook calls?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 19 Mar 2020 15:17:16 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Adding missing object access hook invocations" }, { "msg_contents": "\n\n> On Mar 19, 2020, at 11:17 AM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> \n> On 2020-Mar-18, Mark Dilger wrote:\n> \n>> Here is the latest patch.\n> \n> So you insist in keeping the Drop hook calls?\n\nMy apologies, not at all. I appear to have attached the wrong patch. 
Will post v3 shortly.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 19 Mar 2020 11:30:02 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Adding missing object access hook invocations" }, { "msg_contents": "> On Mar 19, 2020, at 11:30 AM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n> \n> Will post v3 shortly.\n\n\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Thu, 19 Mar 2020 11:47:46 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Adding missing object access hook invocations" }, { "msg_contents": "On Thu, Mar 19, 2020 at 11:47:46AM -0700, Mark Dilger wrote:\n> On Mar 19, 2020, at 11:30 AM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>> Will post v3 shortly.\n\nThanks for sending a new version of the patch and removing the bits\nabout object drops. Your additions to src/backend/ look fine to me,\nso I have no objections to commit it. The only module we have in core\nthat makes use of object_access_hook is sepgsql. Adding support for\nit could be done in a separate commit for AMs, stats and user mappings\nbut we would need a use-case for it. One thing that I can see is that\neven if we test for ALTER put_your_object_type_here foo RENAME TO in\nthe module and that your patch adds one InvokeObjectPostAlterHook()\nfor ALTER RULE, we don't have support for rules in sepgsql (see\nsepgsql_object_access for OAT_POST_CREATE). 
So that's fine.\n\nUnfortunately, we are past feature freeze so this will have to wait\nuntil v14 opens for business to be merged, and I'll take care of it.\nOr would others prefer to not wait one extra year for those changes to\nbe released?\n\nPlease note that there is a commit fest entry, though you forgot to\nadd your name as author of the patch:\nhttps://commitfest.postgresql.org/28/2513/\n--\nMichael", "msg_date": "Mon, 20 Apr 2020 07:55:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Adding missing object access hook invocations" }, { "msg_contents": "\n\n> On Apr 19, 2020, at 3:55 PM, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Thu, Mar 19, 2020 at 11:47:46AM -0700, Mark Dilger wrote:\n>> On Mar 19, 2020, at 11:30 AM, Mark Dilger <mark.dilger@enterprisedb.com> wrote:\n>>> Will post v3 shortly.\n> \n> Thanks for sending a new version of the patch and removing the bits\n> about object drops. Your additions to src/backend/ look fine to me,\n> so I have no objections to commit it. The only module we have in core\n> that makes use of object_access_hook is sepgsql. Adding support for\n> it could be done in a separate commit for AMs, stats and user mappings\n> but we would need a use-case for it. One thing that I can see is that\n> even if we test for ALTER put_your_object_type_here foo RENAME TO in\n> the module and that your patch adds one InvokeObjectPostAlterHook()\n> for ALTER RULE, we don't have support for rules in sepgsql (see\n> sepgsql_object_access for OAT_POST_CREATE). So that's fine.\n> \n> Unfortunately, we are past feature freeze so this will have to wait\n> until v14 opens for business to be merged, and I'll take care of it.\n> Or would others prefer to not wait one extra year for those changes to\n> be released?\n\nI don't intend to make any special pleading for this to go in after feature freeze. 
Let's see if others feel differently.\n\nThanks for the review!\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Sun, 19 Apr 2020 16:43:45 -0700", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Adding missing object access hook invocations" }, { "msg_contents": "On 2020-Apr-20, Michael Paquier wrote:\n\n> Unfortunately, we are past feature freeze so this will have to wait\n> until v14 opens for business to be merged, and I'll take care of it.\n> Or would others prefer to not wait one extra year for those changes to\n> be released?\n\nI think it's fine to put this in at this time. It's not a new feature.\nThe only thing this needs is to go through a new release cycle so that\npeople can adjust to the new hook invocations as necessary.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 20 Apr 2020 13:32:31 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Adding missing object access hook invocations" }, { "msg_contents": "On Mon, Apr 20, 2020 at 01:32:31PM -0400, Alvaro Herrera wrote:\n> I think it's fine to put this in at this time. It's not a new feature.\n> The only thing this needs is to go through a new release cycle so that\n> people can adjust to the new hook invocations as necessary.\n\nOkay. Any other opinions? 
I am in a 50/50 state about that stuff.\n\nI don't really see any reason why this couldn't be committed even at\nthis late date, but I also don't care that much. I suspect the number\nof extension authors who are likely to have to make any code changes\nis small. It's anybody's guess whether those people would like these\nchanges (because now they can support all of these object types even\nin v13, rather than having to wait another year) or dislike them\n(because it breaks something). I would actually be more inclined to\nbet on the former rather than the latter, but unless somebody speaks\nup, it's all just speculation.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 20 May 2020 13:57:31 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adding missing object access hook invocations" }, { "msg_contents": "On Wed, May 20, 2020 at 01:57:31PM -0400, Robert Haas wrote:\n> I don't really see any reason why this couldn't be committed even at\n> this late date, but I also don't care that much. I suspect the number\n> of extension authors who are likely to have to make any code changes\n> is small. It's anybody's guess whether those people would like these\n> changes (because now they can support all of these object types even\n> in v13, rather than having to wait another year) or dislike them\n> (because it breaks something). I would actually be more inclined to\n> bet on the former rather than the latter, but unless somebody speaks\n> up, it's all just speculation.\n\nThanks for the input, Robert. So, even if we are post-beta1 it looks\nlike there are more upsides than downsides to get that stuff done\nsooner than later. 
I propose to get that applied in the next couple\nof days, please let me know if there are any objections.\n--\nMichael", "msg_date": "Thu, 21 May 2020 09:32:55 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Adding missing object access hook invocations" }, { "msg_contents": "On Thu, May 21, 2020 at 09:32:55AM +0900, Michael Paquier wrote:\n> Thanks for the input, Robert. So, even if we are post-beta1 it looks\n> like there are more upsides than downsides to get that stuff done\n> sooner than later. I propose to get that applied in the next couple\n> of days, please let me know if there are any objections.\n\nHearing nothing, done. Thanks all for the discussion.\n--\nMichael", "msg_date": "Sat, 23 May 2020 14:21:50 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Adding missing object access hook invocations" }, { "msg_contents": "On 2020-May-23, Michael Paquier wrote:\n\n> On Thu, May 21, 2020 at 09:32:55AM +0900, Michael Paquier wrote:\n> > Thanks for the input, Robert. So, even if we are post-beta1 it looks\n> > like there are more upsides than downsides to get that stuff done\n> > sooner than later. I propose to get that applied in the next couple\n> > of days, please let me know if there are any objections.\n> \n> Hearing nothing, done. Thanks all for the discussion.\n\nThanks!\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 23 May 2020 22:31:55 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Adding missing object access hook invocations" } ]
[ { "msg_contents": "Hi,\n\nIn PageIsVerified() we report a WARNING as follows:\n\n ereport(WARNING,\n (ERRCODE_DATA_CORRUPTED,\n errmsg(\"page verification failed, calculated checksum\n%u but expected %u\",\n checksum, p->pd_checksum)));\n\nHowever the error message won't have sql error code due to missing\nerrcode(). As far as I can see there are four places:\n\n$ git grep \"(ERRCODE\" | grep -v errcode\ncontrib/adminpack/adminpack.c:\n(ERRCODE_DUPLICATE_FILE,\ncontrib/adminpack/adminpack.c: (ERRCODE_DUPLICATE_FILE,\ncontrib/adminpack/adminpack.c:\n (ERRCODE_UNDEFINED_FILE,\nsrc/backend/storage/page/bufpage.c:\n(ERRCODE_DATA_CORRUPTED,\nsrc/pl/plpgsql/src/pl_exec.c: else if\n(ERRCODE_IS_CATEGORY(sqlerrstate) &&\n\nAttached patch add errcode() to these places.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 17 Mar 2020 17:37:51 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Missing errcode() in ereport" }, { "msg_contents": "On Tue, Mar 17, 2020 at 2:08 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> Hi,\n>\n> In PageIsVerified() we report a WARNING as follows:\n>\n> ereport(WARNING,\n> (ERRCODE_DATA_CORRUPTED,\n> errmsg(\"page verification failed, calculated checksum\n> %u but expected %u\",\n> checksum, p->pd_checksum)));\n>\n> However the error message won't have sql error code due to missing\n> errcode(). 
As far as I can see there are four places:\n>\n> $ git grep \"(ERRCODE\" | grep -v errcode\n> contrib/adminpack/adminpack.c:\n> (ERRCODE_DUPLICATE_FILE,\n> contrib/adminpack/adminpack.c: (ERRCODE_DUPLICATE_FILE,\n> contrib/adminpack/adminpack.c:\n> (ERRCODE_UNDEFINED_FILE,\n> src/backend/storage/page/bufpage.c:\n> (ERRCODE_DATA_CORRUPTED,\n> src/pl/plpgsql/src/pl_exec.c: else if\n> (ERRCODE_IS_CATEGORY(sqlerrstate) &&\n>\n> Attached patch add errcode() to these places.\n>\n\n+1. This looks like an oversight to me.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 17 Mar 2020 14:29:47 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Missing errcode() in ereport" }, { "msg_contents": "On Tue, Mar 17, 2020 at 10:00 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Mar 17, 2020 at 2:08 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > Hi,\n> >\n> > In PageIsVerified() we report a WARNING as follows:\n> >\n> > ereport(WARNING,\n> > (ERRCODE_DATA_CORRUPTED,\n> > errmsg(\"page verification failed, calculated checksum\n> > %u but expected %u\",\n> > checksum, p->pd_checksum)));\n> >\n> > However the error message won't have sql error code due to missing\n> > errcode(). As far as I can see there are four places:\n> >\n> > $ git grep \"(ERRCODE\" | grep -v errcode\n> > contrib/adminpack/adminpack.c:\n> > (ERRCODE_DUPLICATE_FILE,\n> > contrib/adminpack/adminpack.c: (ERRCODE_DUPLICATE_FILE,\n> > contrib/adminpack/adminpack.c:\n> > (ERRCODE_UNDEFINED_FILE,\n> > src/backend/storage/page/bufpage.c:\n> > (ERRCODE_DATA_CORRUPTED,\n> > src/pl/plpgsql/src/pl_exec.c: else if\n> > (ERRCODE_IS_CATEGORY(sqlerrstate) &&\n> >\n> > Attached patch add errcode() to these places.\n> >\n>\n> +1. This looks like an oversight to me.\n\ngood catch! 
And patch LGTM.\n\n\n", "msg_date": "Tue, 17 Mar 2020 10:26:57 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Missing errcode() in ereport" }, { "msg_contents": "On Tue, Mar 17, 2020 at 10:26:57AM +0100, Julien Rouhaud wrote:\n> On Tue, Mar 17, 2020 at 10:00 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>> +1. This looks like an oversight to me.\n> \n> good catch! And patch LGTM.\n\nDefinitely an oversight. All stable branches down to 9.5 have\nmistakes in the same area, with nothing extra by grepping around.\nAmit, I guess that you will take care of it?\n--\nMichael", "msg_date": "Tue, 17 Mar 2020 18:58:39 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Missing errcode() in ereport" }, { "msg_contents": "On Tue, Mar 17, 2020 at 3:28 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Mar 17, 2020 at 10:26:57AM +0100, Julien Rouhaud wrote:\n> > On Tue, Mar 17, 2020 at 10:00 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >> +1. This looks like an oversight to me.\n> >\n> > good catch! And patch LGTM.\n>\n> Definitely an oversight. All stable branches down to 9.5 have\n> mistakes in the same area, with nothing extra by grepping around.\n> Amit, I guess that you will take care of it?\n>\n\nYes, I will unless I see any objections in a day or so.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 17 Mar 2020 15:46:17 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Missing errcode() in ereport" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Tue, Mar 17, 2020 at 3:28 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> Definitely an oversight. 
All stable branches down to 9.5 have\n>> mistakes in the same area, with nothing extra by grepping around.\n>> Amit, I guess that you will take care of it?\n\n> Yes, I will unless I see any objections in a day or so.\n\nNo need to wait, it's a pretty obvious thinko.\n\nWe might want to spend some effort thinking how to find or prevent\nadditional bugs of the same ilk ... but correcting the ones already\nfound needn't wait on that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 17 Mar 2020 10:09:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Missing errcode() in ereport" }, { "msg_contents": "On Tue, Mar 17, 2020 at 7:39 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > On Tue, Mar 17, 2020 at 3:28 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >> Definitely an oversight. All stable branches down to 9.5 have\n> >> mistakes in the same area, with nothing extra by grepping around.\n> >> Amit, I guess that you will take care of it?\n>\n> > Yes, I will unless I see any objections in a day or so.\n>\n> No need to wait, it's a pretty obvious thinko.\n>\n\nOkay, I will push in some time.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 18 Mar 2020 09:01:28 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Missing errcode() in ereport" }, { "msg_contents": "On Wed, Mar 18, 2020 at 9:01 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Tue, Mar 17, 2020 at 7:39 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Amit Kapila <amit.kapila16@gmail.com> writes:\n> > > On Tue, Mar 17, 2020 at 3:28 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > >> Definitely an oversight. 
All stable branches down to 9.5 have\n> > >> mistakes in the same area, with nothing extra by grepping around.\n> > >> Amit, I guess that you will take care of it?\n> >\n> > > Yes, I will unless I see any objections in a day or so.\n> >\n> > No need to wait, it's a pretty obvious thinko.\n> >\n>\n> Okay, I will push in some time.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 18 Mar 2020 10:27:37 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Missing errcode() in ereport" }, { "msg_contents": "On Wed, 18 Mar 2020 at 13:57, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Mar 18, 2020 at 9:01 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Tue, Mar 17, 2020 at 7:39 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >\n> > > Amit Kapila <amit.kapila16@gmail.com> writes:\n> > > > On Tue, Mar 17, 2020 at 3:28 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > > >> Definitely an oversight. All stable branches down to 9.5 have\n> > > >> mistakes in the same area, with nothing extra by grepping around.\n> > > >> Amit, I guess that you will take care of it?\n> > >\n> > > > Yes, I will unless I see any objections in a day or so.\n> > >\n> > > No need to wait, it's a pretty obvious thinko.\n> > >\n> >\n> > Okay, I will push in some time.\n> >\n>\n> Pushed.\n\nThank you!\n\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 19 Mar 2020 12:31:44 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: Missing errcode() in ereport" }, { "msg_contents": "Hi,\n\nOn 2020-03-17 10:09:18 -0400, Tom Lane wrote:\n> We might want to spend some effort thinking how to find or prevent\n> additional bugs of the same ilk ...\n\nYea, that'd be good. 
Trying to help people new to postgres write their\nfirst patches I found that ereport is very confusing to them - largely\nbecause the syntax doesn't make much sense. Made worse by the compiler\nerror messages being terrible in many cases.\n\nNot sure there's much we can do without changing ereport's \"signature\"\nthough :(\n\nRegards,\n\nAndres\n\n\n", "msg_date": "Thu, 19 Mar 2020 10:59:02 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Missing errcode() in ereport" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-03-17 10:09:18 -0400, Tom Lane wrote:\n>> We might want to spend some effort thinking how to find or prevent\n>> additional bugs of the same ilk ...\n\n> Yea, that'd be good. Trying to help people new to postgres write their\n> first patches I found that ereport is very confusing to them - largely\n> because the syntax doesn't make much sense. Made worse by the compiler\n> error messages being terrible in many cases.\n\n> Not sure there's much we can do without changing ereport's \"signature\"\n> though :(\n\nNow that we can rely on having varargs macros, I think we could\nstop requiring the extra level of parentheses, ie instead of\n\n ereport(ERROR,\n (errcode(ERRCODE_DIVISION_BY_ZERO),\n errmsg(\"division by zero\")));\n\nit could be just\n\n ereport(ERROR,\n errcode(ERRCODE_DIVISION_BY_ZERO),\n errmsg(\"division by zero\"));\n\n(The old syntax had better still work, of course. 
I'm not advocating\nrunning around and changing existing calls.)\n\nI'm not sure that this'd really move the goalposts much in terms of making\nit any less confusing, but possibly it would improve the compiler errors?\nDo you have any concrete examples of crummy error messages?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 19 Mar 2020 14:07:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Missing errcode() in ereport" }, { "msg_contents": "Hi,\n\nOn 2020-03-19 14:07:04 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2020-03-17 10:09:18 -0400, Tom Lane wrote:\n> >> We might want to spend some effort thinking how to find or prevent\n> >> additional bugs of the same ilk ...\n> \n> > Yea, that'd be good. Trying to help people new to postgres write their\n> > first patches I found that ereport is very confusing to them - largely\n> > because the syntax doesn't make much sense. Made worse by the compiler\n> > error messages being terrible in many cases.\n> \n> > Not sure there's much we can do without changing ereport's \"signature\"\n> > though :(\n> \n> Now that we can rely on having varargs macros, I think we could\n> stop requiring the extra level of parentheses, ie instead of\n> \n> ereport(ERROR,\n> (errcode(ERRCODE_DIVISION_BY_ZERO),\n> errmsg(\"division by zero\")));\n> \n> it could be just\n> \n> ereport(ERROR,\n> errcode(ERRCODE_DIVISION_BY_ZERO),\n> errmsg(\"division by zero\"));\n> \n> (The old syntax had better still work, of course. 
I'm not advocating\n> running around and changing existing calls.)\n\nI think that'd be an improvement, because:\n\n> I'm not sure that this'd really move the goalposts much in terms of making\n> it any less confusing, but possibly it would improve the compiler errors?\n> Do you have any concrete examples of crummy error messages?\n\nane of the ones I saw confuse people is just:\n/home/andres/src/postgresql/src/backend/tcop/postgres.c:3727:4: error: ‘ereport’ undeclared (first use in this function); did you mean ‘rresvport’?\n 3727 | ereport(FATAL,\n | ^~~~~~~\n | rresvport\n/home/andres/src/postgresql/src/backend/tcop/postgres.c:3727:4: note: each undeclared identifier is reported only once for each function it appears in\n\nbecause the extra parens haven't been added.\n\n\nI personally actually hit a variant of that on a semi-regular basis:\nClosing the parens for \"rest\" early, as there's just so many parens in\nereports, especially if an errmsg argument is a function call\nitself. Which leads to a variation of the above error message. 
I know\nhow to address it, obviously, but I do find it somewhat annoying to deal\nwith.\n\nAnother one I've both seen and committed byself is converting an elog to\nan ereport, and not adding an errcode() around the error code - which\nsilently looks like it works.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 19 Mar 2020 16:04:46 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Missing errcode() in ereport" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-03-19 14:07:04 -0400, Tom Lane wrote:\n>> Now that we can rely on having varargs macros, I think we could\n>> stop requiring the extra level of parentheses,\n\n> I think that'd be an improvement, because:\n> ane of the ones I saw confuse people is just:\n> /home/andres/src/postgresql/src/backend/tcop/postgres.c:3727:4: error: ‘ereport’ undeclared (first use in this function); did you mean ‘rresvport’?\n> 3727 | ereport(FATAL,\n> | ^~~~~~~\n> | rresvport\n> /home/andres/src/postgresql/src/backend/tcop/postgres.c:3727:4: note: each undeclared identifier is reported only once for each function it appears in\n> because the extra parens haven't been added.\n\nAh, so the macro isn't expanded at all if it appears to have more than\ntwo parameters? That seems odd, but I suppose some compilers might\nwork that way. Switching to varargs would improve that for sure.\n\n> Another one I've both seen and committed byself is converting an elog to\n> an ereport, and not adding an errcode() around the error code - which\n> silently looks like it works.\n\nYou mean not adding errmsg() around the error string? Yeah, I can\nbelieve that one easily. It's sort of the same problem as forgetting\nto wrap errcode() around the ERRCODE_ constant.\n\nI think that at least some compilers will complain about side-effect-free\nsubexpressions of a comma expression. 
Could we restructure things so\nthat the errcode/errmsg/etc calls form a standalone comma expression\nrather than appearing to be arguments of a varargs function? The\ncompiler's got no hope of realizing there's anything wrong when it\nthinks it's passing an integer or string constant to a function it knows\nnothing about. But if it could see that nothing at all is being done with\nthe constant, maybe we'd get somewhere.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 19 Mar 2020 19:32:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Missing errcode() in ereport" }, { "msg_contents": "I wrote:\n> I think that at least some compilers will complain about side-effect-free\n> subexpressions of a comma expression. Could we restructure things so\n> that the errcode/errmsg/etc calls form a standalone comma expression\n> rather than appearing to be arguments of a varargs function?\n\nYeah, the attached quick-hack patch seems to work really well for this.\nI find that this now works:\n\n ereport(ERROR,\n errcode(ERRCODE_DIVISION_BY_ZERO),\n errmsg(\"foo\"));\n\nwhere before it gave the weird error you quoted. Also, these both\ndraw warnings about side-effect-free expressions:\n\n ereport(ERROR,\n ERRCODE_DIVISION_BY_ZERO,\n errmsg(\"foo\"));\n\n ereport(ERROR,\n \"%d\", 42);\n\nWith gcc you just need -Wall to get such warnings, and clang\nseems to give them by default.\n\nAs a nice side benefit, the backend gets a couple KB smaller from\nremoval of errfinish's useless dummy argument.\n\nI think that we could now also change all the helper functions\n(errmsg et al) to return void instead of a dummy value, but that\nwould make the patch a lot longer and probably not move the\ngoalposts much in terms of error detection.\n\nIt also looks like we could use the same trick to get plain elog()\nto have the behavior of not evaluating its arguments when it's\nnot going to print anything. 
I've not poked at that yet either.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 19 Mar 2020 21:03:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Missing errcode() in ereport" }, { "msg_contents": "Hi,\n\nOn 2020-03-19 19:32:55 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2020-03-19 14:07:04 -0400, Tom Lane wrote:\n> >> Now that we can rely on having varargs macros, I think we could\n> >> stop requiring the extra level of parentheses,\n>\n> > I think that'd be an improvement, because:\n> > ane of the ones I saw confuse people is just:\n> > /home/andres/src/postgresql/src/backend/tcop/postgres.c:3727:4: error: ‘ereport’ undeclared (first use in this function); did you mean ‘rresvport’?\n> > 3727 | ereport(FATAL,\n> > | ^~~~~~~\n> > | rresvport\n> > /home/andres/src/postgresql/src/backend/tcop/postgres.c:3727:4: note: each undeclared identifier is reported only once for each function it appears in\n> > because the extra parens haven't been added.\n>\n> Ah, so the macro isn't expanded at all if it appears to have more than\n> two parameters? That seems odd, but I suppose some compilers might\n> work that way. Switching to varargs would improve that for sure.\n\nYea. Newer gcc versions do warn about a parameter mismatch count, which\nhelps. I'm not sure what the preprocessor / compiler should do intead?\n\nFWIW, clang also doesn't expand the macro.\n\n\n> > Another one I've both seen and committed byself is converting an elog to\n> > an ereport, and not adding an errcode() around the error code - which\n> > silently looks like it works.\n>\n> You mean not adding errmsg() around the error string? Yeah, I can\n> believe that one easily. 
It's sort of the same problem as forgetting\n> to wrap errcode() around the ERRCODE_ constant.\n\nBoth of these, I think.\n\n\n> Could we restructure things so that the errcode/errmsg/etc calls form\n> a standalone comma expression rather than appearing to be arguments of\n> a varargs function? The compiler's got no hope of realizing there's\n> anything wrong when it thinks it's passing an integer or string\n> constant to a function it knows nothing about. But if it could see\n> that nothing at all is being done with the constant, maybe we'd get\n> somewhere.\n\nWorth a try. Not doing a pointless varargs call could also end up\nreducing elog overhead a bit (right now we push a lot of 0 as vararg\narguments for no reason).\n\nA quick hack suggests that it works:\n\n/home/andres/src/postgresql/src/backend/tcop/postgres.c: In function ‘process_postgres_switches’:\n/home/andres/src/postgresql/src/backend/tcop/postgres.c:3728:27: warning: left-hand operand of comma expression has no effect [-Wunused-value]\n 3728 | (ERRCODE_SYNTAX_ERROR,\n | ^\n/home/andres/src/postgresql/src/include/utils/elog.h:126:4: note: in definition of macro ‘ereport_domain’\n 126 | rest; \\\n | ^~~~\n/home/andres/src/postgresql/src/backend/tcop/postgres.c:3727:4: note: in expansion of macro ‘ereport’\n 3727 | ereport(FATAL,\n | ^~~~~~~\n\n\nand in an optimized build the resulting code indeed looks a bit better:\n\nbefore:\n text\t data\t bss\t dec\t hex\tfilename\n8462795\t 176128\t 204464\t8843387\t 86f07b\tsrc/backend/postgres\n\nLooking at FloatExceptionHandler as an example:\n\n2860\t\t/* We're not returning, so no need to save errno */\n2861\t\tereport(ERROR,\n 0x0000000000458bc0 <+0>:\tpush %rbp\n 0x0000000000458bc1 <+1>:\tmov $0xb2d,%edx\n 0x0000000000458bc6 <+6>:\tlea 0x214c9b(%rip),%rsi # 0x66d868\n 0x0000000000458bcd <+13>:\tmov %rsp,%rbp\n 0x0000000000458bd0 <+16>:\tpush %r13\n 0x0000000000458bd2 <+18>:\txor %r8d,%r8d\n 0x0000000000458bd5 <+21>:\tlea 0x2d9dc4(%rip),%rcx # 0x7329a0 
<__func__.41598>\n 0x0000000000458bdc <+28>:\tpush %r12\n 0x0000000000458bde <+30>:\tmov $0x14,%edi\n 0x0000000000458be3 <+35>:\tcallq 0x5a8c00 <errstart>\n 0x0000000000458be8 <+40>:\tlea 0x214e59(%rip),%rdi # 0x66da48\n 0x0000000000458bef <+47>:\txor %eax,%eax\n 0x0000000000458bf1 <+49>:\tcallq 0x5acb00 <errdetail>\n 0x0000000000458bf6 <+54>:\tmov %eax,%r13d\n 0x0000000000458bf9 <+57>:\tlea 0x1bf7fb(%rip),%rdi # 0x6183fb\n 0x0000000000458c00 <+64>:\txor %eax,%eax\n 0x0000000000458c02 <+66>:\tcallq 0x5ac710 <errmsg>\n 0x0000000000458c07 <+71>:\tmov $0x1020082,%edi\n 0x0000000000458c0c <+76>:\tmov %eax,%r12d\n 0x0000000000458c0f <+79>:\tcallq 0x5ac5b0 <errcode>\n 0x0000000000458c14 <+84>:\tmov %eax,%edi\n 0x0000000000458c16 <+86>:\tmov %r13d,%edx\n 0x0000000000458c19 <+89>:\tmov %r12d,%esi\n 0x0000000000458c1c <+92>:\txor %eax,%eax\n 0x0000000000458c1e <+94>:\tcallq 0x5ac020 <errfinish>\n\nvs after:\n text\t data\t bss\t dec\t hex\tfilename\n8395731\t 176128\t 204464\t8776323\t 85ea83\tsrc/backend/postgres\n2861\t\tereport(ERROR,\n 0x0000000000449dd0 <+0>:\tpush %rbp\n 0x0000000000449dd1 <+1>:\txor %r8d,%r8d\n 0x0000000000449dd4 <+4>:\tlea 0x2d8a65(%rip),%rcx # 0x722840 <__func__.41598>\n 0x0000000000449ddb <+11>:\tmov %rsp,%rbp\n 0x0000000000449dde <+14>:\tmov $0xb2d,%edx\n 0x0000000000449de3 <+19>:\tlea 0x2137ee(%rip),%rsi # 0x65d5d8\n 0x0000000000449dea <+26>:\tmov $0x14,%edi\n 0x0000000000449def <+31>:\tcallq 0x5992e0 <errstart>\n 0x0000000000449df4 <+36>:\tmov $0x1020082,%edi\n 0x0000000000449df9 <+41>:\tcallq 0x59cc80 <errcode>\n 0x0000000000449dfe <+46>:\tlea 0x1be496(%rip),%rdi # 0x60829b\n 0x0000000000449e05 <+53>:\txor %eax,%eax\n 0x0000000000449e07 <+55>:\tcallq 0x59cde0 <errmsg>\n 0x0000000000449e0c <+60>:\tlea 0x2139a5(%rip),%rdi # 0x65d7b8\n 0x0000000000449e13 <+67>:\txor %eax,%eax\n 0x0000000000449e15 <+69>:\tcallq 0x59d1d0 <errdetail>\n 0x0000000000449e1a <+74>:\tcallq 0x59c700 <errfinish>\n\n\nI wonder if it'd become a relevant backpatch pain if we 
started to have\nsome ereports() without the additional parens in 13+. Would it perhaps\nmake sense to backpatch just the part that removes the need for the\nparens, but not the return type changes?\n\nThis is patch 0001.\n\n\nWe can also remove elog() support code now, because with __VA_ARGS__\nsupport it's really just a wrapper for ereport(elevel,\nerrmsg(__VA_ARGS__)). This is patch 0002.\n\n\nI think it might also be a good idea to move __FILE__, __LINE__,\nPG_FUNCNAME_MACRO, domain from being parameters to errstart to\nerrfinish. For elevel < ERROR its sad that we currently pass them even\nif we don't emit the message. This is patch 0003.\n\n\nI wonder if its worth additionally ensuring that errcode, errmsg,\n... are only called within errstart/errfinish? We could have errstart()\nreturn the error being \"prepared\" and store it in a local variable, and\nmake errmsg() etc macros that internally pass that variable to the\n\"real\" errmsg() etc. Like\n\n#define errmsg(...) errmsg_impl(current_error, __VA_ARGS__)\n\nand set up current_error in ereport_domain() roughly like\n do {\n ErrorData *current_error_;\n if ((current_error_ = errstart(elevel, the, rest)) != NULL)\n {\n ...\n errfinish()\n }\n ...\n\nprobably not worth it?\n\nGreetings,\n\nAndres Freund", "msg_date": "Thu, 19 Mar 2020 18:59:08 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Missing errcode() in ereport" }, { "msg_contents": "Hi,\n\nOn 2020-03-19 21:03:17 -0400, Tom Lane wrote:\n> I wrote:\n> > I think that at least some compilers will complain about side-effect-free\n> > subexpressions of a comma expression. Could we restructure things so\n> > that the errcode/errmsg/etc calls form a standalone comma expression\n> > rather than appearing to be arguments of a varargs function?\n>\n> Yeah, the attached quick-hack patch seems to work really well for\n> this.\n\nHeh, you're too fast.
I just sent something similar, and a few followup\npatches.\n\nWhat is your thinking about pain around backpatching it might introduce?\nWe can't easily backpatch support for ereport-without-extra-parens, I\nthink, because it needs __VA_ARGS__?\n\n\n> As a nice side benefit, the backend gets a couple KB smaller from\n> removal of errfinish's useless dummy argument.\n\nI don't think it's the removal of the dummy parameter itself that\nconstitutes most of the savings, but instead it's not having to push the\nreturn value of errmsg(), errcode(), et al as vararg arguments.\n\n\n> I think that we could now also change all the helper functions\n> (errmsg et al) to return void instead of a dummy value, but that\n> would make the patch a lot longer and probably not move the\n> goalposts much in terms of error detection.\n\nI did include that in my prototype patch. Agree that it's not necessary\nfor the error detection capability, but it seems misleading to leave the\nreturn values around if they're not actually needed.\n\n\n> It also looks like we could use the same trick to get plain elog()\n> to have the behavior of not evaluating its arguments when it's\n> not going to print anything. I've not poked at that yet either.\n\nI've included a patch for that. I think it's now sufficient to\n#define elog(elevel, ...) 
\\\n\tereport(elevel, errmsg(__VA_ARGS__))\n\nwhich imo is quite nice, as it allows us to get rid of a lot of\nduplication.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 19 Mar 2020 19:12:04 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Missing errcode() in ereport" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I wonder if it'd become a relevant backpatch pain if we started to have\n> some ereports() without the additional parens in 13+.\n\nYeah, it would be a nasty backpatch hazard.\n\n> Would it perhaps\n> make sense to backpatch just the part that removes the need for the\n> parents, but not the return type changes?\n\nI was just looking into that. It would be pretty painless to do it\nin v12, but before that we weren't requiring C99. Having said that,\ntrolling the buildfarm database shows exactly zero active members\nthat don't report having __VA_ARGS__, in the branches where we test\nthat. (And pg_config.h.win32 was assuming that MSVC had that, too.)\n\nCould we get away with moving the compiler goalposts for the back\nbranches? I dunno, but it's a fact that we aren't testing anymore\nwith any compilers that would complain about unconditional use of\n__VA_ARGS__. So it might be broken already and we wouldn't know it.\n(I suspect the last buildfarm animal that would've complained about\nthis was pademelon, which I retired more than a year ago IIRC.)\n\n> We can also remove elog() support code now, because with __VA_ARGS__\n> support it's really just a wrapper for ereport(elevel,\n> errmsg(__VA_ARGS_)). This is patch 0002.\n\nYeah, something similar had occurred to me but I didn't write the patch\nyet. Note it should be errmsg_internal(); also, does the default\nfor errcode come out the same?\n\n> I think it might also be a good idea to move __FILE__, __LINE__,\n> PG_FUNCNAME_MACRO, domain from being parameters to errstart to\n> errfinish. 
For elevel < ERROR its sad that we currently pass them even\n> if we don't emit the message. This is patch 0003.\n\nOh, that's a good idea that hadn't occurred to me.\n\n> I wonder if its worth additionally ensuring that errcode, errmsg,\n> ... are only called within errstart/errfinish?\n\nMeh. That's wrong at least for errcontext(), and I'm not sure it's\nreally worth anything to enforce it for the others.\n\nI think the key decision we'd have to make to move forward on this\nis to decide whether it's still project style to prefer the extra\nparens, or whether we want new code to do without them going\nforward. If we don't want to risk requiring __VA_ARGS__ for the\nold branches then I'd vote in favor of keeping the parens as\npreferred style, at least till v11 is out of support. If we do\nchange that in the back branches then there'd be reason to prefer\nto go without parens. New coders might still be confused about\nwhy there are all these calls with the useless parens, though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 19 Mar 2020 22:32:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Missing errcode() in ereport" }, { "msg_contents": "Hi,\n\nOn 2020-03-19 22:32:30 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I wonder if it'd become a relevant backpatch pain if we started to have\n> > some ereports() without the additional parens in 13+.\n> \n> Yeah, it would be a nasty backpatch hazard.\n> \n> > Would it perhaps\n> > make sense to backpatch just the part that removes the need for the\n> > parents, but not the return type changes?\n> \n> I was just looking into that. It would be pretty painless to do it\n> in v12, but before that we weren't requiring C99. Having said that,\n> trolling the buildfarm database shows exactly zero active members\n> that don't report having __VA_ARGS__, in the branches where we test\n> that. 
(And pg_config.h.win32 was assuming that MSVC had that, too.)\n> \n> Could we get away with moving the compiler goalposts for the back\n> branches? I dunno, but it's a fact that we aren't testing anymore\n> with any compilers that would complain about unconditional use of\n> __VA_ARGS__. So it might be broken already and we wouldn't know it.\n\nFWIW, I did grep for unprotected uses, and didn't find anything.\n\n\n> (I suspect the last buildfarm animal that would've complained about\n> this was pademelon, which I retired more than a year ago IIRC.)\n\nI guess a query that searches the logs backwards for animals without\n__VA_ARGS__ would be a good idea?\n\n\n> > We can also remove elog() support code now, because with __VA_ARGS__\n> > support it's really just a wrapper for ereport(elevel,\n> > errmsg(__VA_ARGS_)). This is patch 0002.\n> \n> Yeah, something similar had occurred to me but I didn't write the patch\n> yet. Note it should be errmsg_internal();\n\nOh, right.\n\n\n> also, does the default for errcode come out the same?\n\nI think so - there's no distinct code setting sqlerrcode in\nelog_start/finish. That already relied on errstart().\n\n\n> > I wonder if its worth additionally ensuring that errcode, errmsg,\n> > ... are only called within errstart/errfinish?\n> \n> Meh. That's wrong at least for errcontext(), and I'm not sure it's\n> really worth anything to enforce it for the others.\n\nYea, I'm not convinced either. Especially after changing the err* return\ntypes to void, as that presumably will cause errors with a lot of\nincorrect parens, e.g. about a function with void return type as an\nargument to errmsg().\n\n\n> I think the key decision we'd have to make to move forward on this\n> is to decide whether it's still project style to prefer the extra\n> parens, or whether we want new code to do without them going\n> forward. 
If we don't want to risk requiring __VA_ARGS__ for the\n> old branches then I'd vote in favor of keeping the parens as\n> preferred style, at least till v11 is out of support.\n\nAgreed.\n\n\n> If we do change that in the back branches then there'd be reason to\n> prefer to go without parens. New coders might still be confused about\n> why there are all these calls with the useless parens, though.\n\nThat seems to be an acceptable pain, from my POV.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 19 Mar 2020 21:56:37 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Missing errcode() in ereport" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-03-19 22:32:30 -0400, Tom Lane wrote:\n>> Could we get away with moving the compiler goalposts for the back\n>> branches? I dunno, but it's a fact that we aren't testing anymore\n>> with any compilers that would complain about unconditional use of\n>> __VA_ARGS__. So it might be broken already and we wouldn't know it.\n\n> FWIW, I did grep for unprotected uses, and didn't find anything.\n\nYeah, I also grepped the v11 branch for that.\n\n>> (I suspect the last buildfarm animal that would've complained about\n>> this was pademelon, which I retired more than a year ago IIRC.)\n\n> I guess a query that searches the logs backwards for animals without\n> __VA_ARGS__ would be a good idea?\n\nI did a more wide-ranging scan, looking at the 9.4 branch as far back\nas 2015. Indeed, pademelon is the only animal that ever reported\nnot having __VA_ARGS__ in that timeframe.\n\nSo I've got mixed emotions about this. On the one hand, it seems\nquite unlikely that anyone would ever object if we started requiring\n__VA_ARGS__ in the back branches. On the other hand, it's definitely\nnot project policy to change requirements like that in minor releases.\nAlso the actual benefit might not be much. 
If anyone did mistakenly\nback-patch a fix that included a paren-free ereport, the buildfarm\nwould point out the error soon enough.\n\nI thought for a little bit about making the back branches define ereport\nwith \"...\" if HAVE__VA_ARGS and otherwise not, but if we did that then\nthe buildfarm would *not* complain about paren-free ereports in the back\nbranches. I think that would inevitably lead to there soon being some,\nso that we'd effectively be requiring __VA_ARGS__ anyway. (I suppose\nI could resurrect pademelon ... but who knows whether that old war\nhorse will keep working for another four years till v11 is retired.)\n\nOn balance I'm leaning towards keeping the parens as preferred style\nfor now, adjusting v12 so that the macro will allow paren omission\nbut we don't break ABI, and not touching the older branches. But\nif there's a consensus to require __VA_ARGS__ in the back branches,\nI won't stand in the way.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 20 Mar 2020 10:40:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Missing errcode() in ereport" }, { "msg_contents": "On 2020-Mar-19, Tom Lane wrote:\n\n> I think the key decision we'd have to make to move forward on this\n> is to decide whether it's still project style to prefer the extra\n> parens, or whether we want new code to do without them going\n> forward. If we don't want to risk requiring __VA_ARGS__ for the\n> old branches then I'd vote in favor of keeping the parens as\n> preferred style, at least till v11 is out of support. If we do\n> change that in the back branches then there'd be reason to prefer\n> to go without parens. New coders might still be confused about\n> why there are all these calls with the useless parens, though.\n\nIt seems fine to accept new code in pg14 without the extra parens. 
All\nexisting committers are (or should be) used to the style with the\nparens, so it's unlikely that we'll mess up when backpatching bugfixes;\nand even if we do, the buildfarm would alert us to that soon enough.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 20 Mar 2020 11:42:02 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Missing errcode() in ereport" }, { "msg_contents": "I wrote:\n> On balance I'm leaning towards keeping the parens as preferred style\n> for now, adjusting v12 so that the macro will allow paren omission\n> but we don't break ABI, and not touching the older branches.\n\nHearing no objections, I started to review Andres' patchset with\nthat plan in mind. I noted two things that I don't agree with:\n\n1. I think we should write the ereport macro as\n\n if (errstart(...)) \\\n __VA_ARGS__, errfinish(...); \\\n\nas I had it, not\n\n if (errstart(...)) \\\n { \\\n __VA_ARGS__; \\\n errfinish(...); \\\n } \\\n\nas per Andres. The reason is that I don't have as much faith in the\nlatter construct producing warnings for no-op expressions.\n\n2. We cannot postpone the passing of the \"domain\" argument as Andres'\n0003 patch proposes, because that has to be available to the auxiliary\nerror functions so they do message translation properly. Maybe it'd\nbe possible to finagle things to postpone translation to the very end,\nbut the provisions for errcontext messages being translated in a different\ndomain would make that pretty ticklish. Frankly I don't think it'd be\nworth the complication. There is a clear benefit in delaying the passing\nof filename (since we can skip that strchr() call) but beyond that it\nseems pretty marginal.\n\nOther than that it looks pretty good.
I wrote some documentation\nadjustments and re-split the patch into 0001, which is proposed for\nback-patch into v12, and 0002 which would have to be HEAD only.\n\nOne thing I'm not totally sure about is whether we can rely on\nthis change:\n\n-extern void errfinish(int dummy,...);\n+extern void errfinish(void);\n\nbeing fully ABI-transparent. One would think that as long as errfinish\ndoesn't inspect its argument(s) it doesn't matter whether any are passed,\nbut maybe somewhere there's an architecture where the possible presence\nof varargs arguments makes for a significant difference in the calling\nconvention? We could leave that change out of the v12 patch if we're\nworried about it.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 23 Mar 2020 17:24:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Missing errcode() in ereport" }, { "msg_contents": "Hi,\n\nOn 2020-03-23 17:24:49 -0400, Tom Lane wrote:\n> I wrote:\n> > On balance I'm leaning towards keeping the parens as preferred style\n> > for now, adjusting v12 so that the macro will allow paren omission\n> > but we don't break ABI, and not touching the older branches.\n> \n> Hearing no objections, I started to review Andres' patchset with\n> that plan in mind. I noted two things that I don't agree with:\n> \n> 1. I think we should write the ereport macro as\n> \n> if (errstart(...)) \\\n> __VA_ARGS__, errfinish(...); \\\n> \n> as I had it, not\n> \n> if (errstart(...)) \\\n> { \\\n> __VA_ARGS__; \\\n> errfinish(...); \\\n> } \\\n> \n> as per Andres. The reason is that I don't have as much faith in the\n> latter construct producing warnings for no-op expressions.\n\nHm. I don't think it'll be better, but I don't have a problem going with\nyours either.\n\n\n> 2. 
We cannot postpone the passing of the \"domain\" argument as Andres'\n> 0003 patch proposes, because that has to be available to the auxiliary\n> error functions so they do message translation properly.\n\nAh, good point.\n\n\nI wondered before whether there's a way we could move the elevel check\nin errstart to the macro. For it to be a win we'd presumably have to\nhave a \"synthesized\" log_level variable, basically\nmin(log_min_messages, client_min_messages, ERROR).\n\nProbably not worth it.\n\n\n> Maybe it'd be possible to finagle things to postpone translation to\n> the very end, but the provisions for errcontext messages being\n> translated in a different domain would make that pretty ticklish.\n> Frankly I don't think it'd be worth the complication. There is a\n> clear benefit in delaying the passing of filename (since we can skip\n> that strchr() call) but beyond that it seems pretty marginal.\n\nFair enough.\n\n\n> Other than that it looks pretty good. I wrote some documentation\n> adjustments and re-split the patch into 0001, which is proposed for\n> back-patch into v12, and 0002 which would have to be HEAD only.\n\nCool.\n\n\n> One thing I'm not totally sure about is whether we can rely on\n> this change:\n> \n> -extern void errfinish(int dummy,...);\n> +extern void errfinish(void);\n> \n> being fully ABI-transparent. One would think that as long as errfinish\n> doesn't inspect its argument(s) it doesn't matter whether any are passed,\n> but maybe somewhere there's an architecture where the possible presence\n> of varargs arguments makes for a significant difference in the calling\n> convention? We could leave that change out of the v12 patch if we're\n> worried about it.\n\nI would vote for leaving that out of the v12 patch. 
I'm less worried\nabout odd architectures, and more about sanitizers and/or compilers\nemitting \"security enhanced\" code checking signatures match etc\n(\"control flow integrity\").\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 23 Mar 2020 14:44:46 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Missing errcode() in ereport" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I wondered before whether there's a way we could move the elevel check\n> in errstart to the macro. For it to be a win we'd presumably have to\n> have a \"synthesized\" log_level variable, basically\n> min(log_min_messages, client_min_messages, ERROR).\n> Probably not worth it.\n\nYeah, I don't really agree with that idea. The whole business of which\nelevels will trigger output is fundamentally policy, not mechanism,\nand you do not want policy decisions embedded into ABI so that there\nare hundreds of copies of them. It's a loss for code size and a worse\nloss if you ever want to change the behavior.\n\n> On 2020-03-23 17:24:49 -0400, Tom Lane wrote:\n>> One thing I'm not totally sure about is whether we can rely on\n>> this change:\n>> \n>> -extern void errfinish(int dummy,...);\n>> +extern void errfinish(void);\n>> \n>> being fully ABI-transparent. One would think that as long as errfinish\n>> doesn't inspect its argument(s) it doesn't matter whether any are passed,\n>> but maybe somewhere there's an architecture where the possible presence\n>> of varargs arguments makes for a significant difference in the calling\n>> convention? We could leave that change out of the v12 patch if we're\n>> worried about it.\n\n> I would vote for leaving that out of the v12 patch. I'm less worried\n> about odd architectures, and more about sanitizers and/or compilers\n> emitting \"security enhanced\" code checking signatures match etc\n> (\"control flow integrity\").\n\nYeah, good point. 
Let's skinny the v12 patch down to just macro\nand docs changes, then, not touching elog.c at all.\n\nI made a commitfest entry for this just to see what the cfbot\nthinks of it, but if there are not objections we should go ahead\nand push in a day or so, IMO.\n\n\t\t\tregards, tom lane\n\nBTW, the CF app is showing me as the first author, even though\nI definitely put you in first. Guess it doesn't care about\nauthor ordering ... is that a bug to be fixed?\n\n\n", "msg_date": "Mon, 23 Mar 2020 17:57:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Missing errcode() in ereport" }, { "msg_contents": "On 2020-03-23 17:24:49 -0400, Tom Lane wrote:\n> Hearing no objections, I started to review Andres' patchset with\n> that plan in mind.\n\nThanks for pushing the first part!\n\n\n", "msg_date": "Tue, 24 Mar 2020 12:38:34 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Missing errcode() in ereport" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-03-23 17:24:49 -0400, Tom Lane wrote:\n>> Hearing no objections, I started to review Andres' patchset with\n>> that plan in mind.\n\n> Thanks for pushing the first part!\n\nI pushed all of it, actually.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 24 Mar 2020 16:29:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Missing errcode() in ereport" }, { "msg_contents": "On Wed, Mar 25, 2020 at 9:30 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2020-03-23 17:24:49 -0400, Tom Lane wrote:\n> >> Hearing no objections, I started to review Andres' patchset with\n> >> that plan in mind.\n>\n> > Thanks for pushing the first part!\n>\n> I pushed all of it, actually.\n\nI think this caused anole to say:\n\n\"reloptions.c\", line 1362: error #2042: operand types are incompatible\n(\"void\" and \"int\")\n 
errdetail_internal(\"%s\", _(optenum->detailmsg)) : 0));\n\n\n", "msg_date": "Wed, 25 Mar 2020 18:27:56 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Missing errcode() in ereport" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> I think this caused anole to say:\n\n> \"reloptions.c\", line 1362: error #2042: operand types are incompatible\n> (\"void\" and \"int\")\n> errdetail_internal(\"%s\", _(optenum->detailmsg)) : 0));\n\nYeah, I was just looking at that :-(\n\nWe could revert the change to have these functions return void,\nor we could run around and change the places with this usage\npattern to use \"(void) 0\" instead of just \"0\". The latter would\nbe somewhat painful if only minority compilers warn, though.\nAlso, I don't think that having to change ereport usage was part\nof the agreed-to plan here ... so I'm leaning to the former.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 25 Mar 2020 01:39:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Missing errcode() in ereport" }, { "msg_contents": "I wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n>> I think this caused anole to say:\n>> \"reloptions.c\", line 1362: error #2042: operand types are incompatible\n>> (\"void\" and \"int\")\n>> errdetail_internal(\"%s\", _(optenum->detailmsg)) : 0));\n\n> Yeah, I was just looking at that :-(\n\n> We could revert the change to have these functions return void,\n> or we could run around and change the places with this usage\n> pattern to use \"(void) 0\" instead of just \"0\". The latter would\n> be somewhat painful if only minority compilers warn, though.\n> Also, I don't think that having to change ereport usage was part\n> of the agreed-to plan here ... 
so I'm leaning to the former.\n\nDone that way.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 25 Mar 2020 12:18:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Missing errcode() in ereport" }, { "msg_contents": "On Fri, 20 Mar 2020, 01:59 Andres Freund, <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2020-03-17 10:09:18 -0400, Tom Lane wrote:\n> > We might want to spend some effort thinking how to find or prevent\n> > additional bugs of the same ilk ...\n>\n> Yea, that'd be good. Trying to help people new to postgres write their\n> first patches I found that ereport is very confusing to them - largely\n> because the syntax doesn't make much sense. Made worse by the compiler\n> error messages being terrible in many cases.\n\n\nVery much agreed.\n\nI'd have found it helpful to just have the docs explain clearly how it\nworks by chaining the comma operator using functions with ignored return\nvalues.\n\nThat would also help people understand how they can make parts of an\nereport conditional, e.g. only set errdetail() if there additional info is\ncurrently available W/O duplicating the rest of the ereport .\n\n\nNot sure there's much we can do without changing ereport's \"signature\"\n> though :(\n>\n> Regards,\n>\n> Andres\n>\n>\n>\n", "msg_date": "Mon, 30 Mar 2020 11:54:03 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Missing errcode() in ereport" }, { "msg_contents": "Craig Ringer <craig@2ndquadrant.com> writes:\n> I'd have found it helpful to just have the docs explain clearly how it\n> works by chaining the comma operator using functions with ignored return\n> values.\n\nWant to write some text?\n\n> That would also help people understand how they can make parts of an\n> ereport conditional, e.g. only set errdetail() if there additional info is\n> currently available W/O duplicating the rest of the ereport .\n\nThere are examples of that in the tree, of course, but maybe an\nexample in the docs wouldn't be bad either.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 29 Mar 2020 23:57:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Missing errcode() in ereport" }, { "msg_contents": "Hi,\n\nOn 2020-03-30 11:54:03 +0800, Craig Ringer wrote:\n> On Fri, 20 Mar 2020, 01:59 Andres Freund, <andres@anarazel.de> wrote:\n> > On 2020-03-17 10:09:18 -0400, Tom Lane wrote:\n> > > We might want to spend some effort thinking how to find or prevent\n> > > additional bugs of the same ilk ...\n> >\n> > Yea, that'd be good. Trying to help people new to postgres write their\n> > first patches I found that ereport is very confusing to them - largely\n> > because the syntax doesn't make much sense.
Made worse by the compiler\n> > error messages being terrible in many cases.\n> \n> \n> Very much agreed.\n> \n> I'd have found it helpful to just have the docs explain clearly how it\n> works by chaining the comma operator using functions with ignored return\n> values.\n\nIDK, that seems like it'll be a bit too implementation specific.\n\n\n> That would also help people understand how they can make parts of an\n> ereport conditional, e.g. only set errdetail() if there additional info is\n> currently available W/O duplicating the rest of the ereport .\n\nWell, they can do whatever they can in any function argument list. I\ndon't see why the ereport docs are a good place to explain ... ? ... :\n... ? Or am I misunderstanding what you suggest here?\n\n\n> Not sure there's much we can do without changing ereport's \"signature\"\n\nI think the changes we just did already improved the situation a good\nbit. Both by obviating the need for the additional parentheses, as well\nas making parameters outside of err*() trigger warnings, and also\ngenerally better error/warning messages.\n\n\nI still think we might want to do a bigger change of the logging\nAPIs. See the messages leading up to\nhttps://postgr.es/m/20190805214952.dmnn2oetdquixpp4%40alap3.anarazel.de\nand the message linked from there.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 29 Mar 2020 21:07:34 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Missing errcode() in ereport" } ]
[ { "msg_contents": "I was trying to figure out what exactly the \"crosscheck snapshot\" does in the\nRI checks, and hit some assertion failures:\n\npostgres=# create table p(i int primary key);\nCREATE TABLE\npostgres=# create table f (i int references p on delete cascade on update cascade deferrable initially deferred);\nCREATE TABLE\npostgres=# insert into p values (1);\nINSERT 0 1\npostgres=# begin isolation level repeatable read;\nBEGIN\npostgres=*# table f;\n i \n---\n(0 rows)\n\n\nIn another session:\n\npostgres=# insert into f values (1);\nINSERT 0 1\n\nBack in the first session:\n\npostgres=*# delete from p where i=1;\nTRAP: FailedAssertion(\"!(tp.t_data->t_infomask & HEAP_XMAX_INVALID)\", File: \"heapam.c\", Line: 2652)\n\nI'm not familiar enough with this code but I wonder if it's only about\nincorrect assertions. When I commented out some, I got error message that\nmakes sense to me:\n\npostgres=*# delete from p where i=1;\n2020-03-17 11:59:19.214 CET [89379] ERROR: could not serialize access due to concurrent update\n2020-03-17 11:59:19.214 CET [89379] CONTEXT: SQL statement \"DELETE FROM ONLY \"public\".\"f\" WHERE $1 OPERATOR(pg_catalog.=) \"i\"\"\n2020-03-17 11:59:19.214 CET [89379] STATEMENT: delete from p where i=1;\nERROR: could not serialize access due to concurrent update\nCONTEXT: SQL statement \"DELETE FROM ONLY \"public\".\"f\" WHERE $1 OPERATOR(pg_catalog.=) \"i\"\"\n\n\nSimilarly, if the test ends with an UPDATE statement, I get this failure:\n\npostgres=*# update p set i=i+1 where i=1;\nTRAP: FailedAssertion(\"!ItemPointerEquals(&oldtup.t_self, &oldtup.t_data->t_ctid)\", File: \"heapam.c\", Line: 3275)\n\nLikewise, with the Assert() statements commented out, the right thing seems to\nhappen:\n\n2020-03-17 11:57:04.810 CET [88678] ERROR: could not serialize access due to concurrent update\n2020-03-17 11:57:04.810 CET [88678] CONTEXT: SQL statement \"UPDATE ONLY \"public\".\"f\" SET \"i\" = $1 WHERE $2 OPERATOR(pg_catalog.=) 
\"i\"\"\n2020-03-17 11:57:04.810 CET [88678] STATEMENT: update p set i=i+1 where i=1;\nERROR: could not serialize access due to concurrent update\nCONTEXT: SQL statement \"UPDATE ONLY \"public\".\"f\" SET \"i\" = $1 WHERE $2 OPERATOR(pg_catalog.=) \"i\"\"\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Tue, 17 Mar 2020 12:06:48 +0100", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Assert() failures during RI checks" }, { "msg_contents": "Antonin Houska <ah@cybertec.at> writes:\n> I was trying to figure out what exactly the \"crosscheck snapshot\" does in the\n> RI checks, and hit some assertion failures:\n\nYeah, your example reproduces for me.\n\n> I'm not familiar enough with this code but I wonder if it's only about\n> incorrect assertions.\n\nMmm ... not really. I dug around in the history a bit, and there are\na few moving parts here. The case that's problematic is where the\n\"crosscheck\" stanza starting at heapam.c:2639 fires, and replaces\nresult = TM_Ok with result = TM_Updated. That causes the code to\ngo into the next stanza, which is all about trying to follow a ctid\nupdate link so it can report a valid TID to the caller along with\nthe failure result. But in this case, the tuple at hand has never\nbeen updated, so the assertions fire.\n\nThe crosscheck stanza dates back to 55d85f42a, and at the time,\njust setting result = TM_Updated was sufficient to make the\nthen-far-simpler next stanza do the right thing --- basically,\nwe wanted to just release whatever locks we hold and return the\nfailure result. However, later patches added assertions to verify\nthat that next stanza was doing something sane ... and really, it\nisn't in this case.\n\nAnother thing that is very peculiar in this area is that the initial\nassertion in the second stanza allows the case of result == TM_Deleted.\nIt makes *no* sense to allow that if what we think we're doing is\nchasing a ctid update link. 
That alternative was not allowed until\nreally recently (Andres' commit 5db6df0c0, which appears otherwise\nto be just refactoring), and I wonder how carefully it was really\nthought through.\n\nSo I think there are two somewhat separable issues we have to address:\n\n1. The crosscheck stanza can't get away with just setting result =\nTM_Updated. Maybe the best thing is for it not to try to share code\nwith what follows, but take responsibility for releasing locks and\nreturning consistent data to the caller on its own. I'm not quite\nsure what's the most consistent thing for it to return anyway, or\nhow much we care about what TID is returned in this case. All we\nreally want is for an error to be raised, and I don't think the\nerror path is going to look too closely at the returned TID.\n\n2. Does the following stanza actually need to allow for the case\nof a deleted tuple as well as an updated one? If so, blindly\ntrying to follow the ctid link is not so cool.\n\nI think the situation in heap_update is just the same, though\nit's unclear why the code is slightly different. The extra\nAssert in that crosscheck stanza certainly appears 100% redundant\nwith the later Assert, though both are wrong per this analysis.\n\nIn a non-assert build, this would just boil down to what TID\nis returned along with the failure result code, so it might\nbe that there's no obvious bug, depending on what callers are\ndoing with that TID. That would perhaps explain the lack of\nassociated field trouble reports. 
These issues are pretty old\nso you'd think we'd have heard about it if there were live bugs.\n\nIn any case, seems like we're missing some isolation test\ncoverage here ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 22 Mar 2020 15:00:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Assert() failures during RI checks" }, { "msg_contents": "Hi,\n\nOn 2020-03-22 15:00:51 -0400, Tom Lane wrote:\n> Antonin Houska <ah@cybertec.at> writes:\n> > I was trying to figure out what exactly the \"crosscheck snapshot\" does in the\n> > RI checks, and hit some assertion failures:\n> \n> Yeah, your example reproduces for me.\n> \n> > I'm not familiar enough with this code but I wonder if it's only about\n> > incorrect assertions.\n> \n> Mmm ... not really. I dug around in the history a bit, and there are\n> a few moving parts here. The case that's problematic is where the\n> \"crosscheck\" stanza starting at heapam.c:2639 fires, and replaces\n> result = TM_Ok with result = TM_Updated. That causes the code to\n> go into the next stanza, which is all about trying to follow a ctid\n> update link so it can report a valid TID to the caller along with\n> the failure result. But in this case, the tuple at hand has never\n> been updated, so the assertions fire.\n> \n> The crosscheck stanza dates back to 55d85f42a, and at the time,\n> just setting result = TM_Updated was sufficient to make the\n> then-far-simpler next stanza do the right thing --- basically,\n> we wanted to just release whatever locks we hold and return the\n> failure result. However, later patches added assertions to verify\n> that that next stanza was doing something sane ... and really, it\n> isn't in this case.\n>\n> Another thing that is very peculiar in this area is that the initial\n> assertion in the second stanza allows the case of result == TM_Deleted.\n> It makes *no* sense to allow that if what we think we're doing is\n> chasing a ctid update link. 
That alternative was not allowed until\n> really recently (Andres' commit 5db6df0c0, which appears otherwise\n> to be just refactoring), and I wonder how carefully it was really\n> thought through.\n\nBefore that commit HeapTupleUpdated represented both deletions and\nupdates. When splitting them, and where it wasn't obvious only one of\nthe two could apply, I opted to continue to allow both. After all\npreviously both were allowed too. IOW, I think it was actually\npreviously allowed?\n\nIn this case, isn't it clearly required to accept TM_Deleted? The HTSU\nafter l1: obviously can return that, no?\n\n\nI think what the problem with 5db6df0c0 here could be is that it added\n+ Assert(result != TM_Updated ||\n+ !ItemPointerEquals(&tp.t_self, &tp.t_data->t_ctid));\nas a new restriction. That's likely because TM_Updated should indicate\nan update, and that should imply we have a ctid link - and that callers\nchecked this previously to detect whether an update happened. But\nthat's not the case here, due to the way the crosscheck stanza currently\nworks.\n\nE.g. in <12, callers like ExecDelete() checked things like\n- if (ItemPointerIndicatesMovedPartitions(&hufd.ctid))\n- ereport(ERROR,\n- (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),\n- errmsg(\"tuple to be deleted was already moved to another partition due to concurrent update\")));\n-\n- if (!ItemPointerEquals(tupleid, &hufd.ctid))\n- {\n\nbut that doesn't really work without exposing too much detail of the\nAM to those callsites.\n\nI wonder if we shouldn't just change the crosscheck case to set\nsomething other than TM_Updated, as it's not really accurate to say the\ntuple was updated. Either something like TM_Deleted, or perhaps a new\nreturn code? I think that's effectively how it was treated before the\nsplit of HeapTupleUpdated into TM_Deleted and TM_Updated.\n\n\n> So I think there are two somewhat separable issues we have to address:\n> \n> 1. 
The crosscheck stanza can't get away with just setting result =\n> TM_Updated. Maybe the best thing is for it not to try to share code\n> with what follows, but take responsibility for releasing locks and\n> returning consistent data to the caller on its own. I'm not quite\n> sure what's the most consistent thing for it to return anyway, or\n> how much we care about what TID is returned in this case. All we\n> really want is for an error to be raised, and I don't think the\n> error path is going to look too closely at the returned TID.\n\nIf we want to go for that, perhaps the easiest way to handle this is to\nhave an error path at the end of heap_delete that we reach with two\ngotos. Like\n\ndelete_not_ok:\n\t\tAssert(!(tp.t_data->t_infomask & HEAP_XMAX_INVALID));\n\t\tAssert(result != TM_Updated ||\n\t\t\t !ItemPointerEquals(&tp.t_self, &tp.t_data->t_ctid));\ndelete_no_ok_crosscheck:\n\t\tAssert(result == TM_SelfModified ||\n\t\t\t result == TM_Updated ||\n\t\t\t result == TM_Deleted ||\n\t\t\t result == TM_BeingModified);\n...\n\n\n\n\n> 2. Does the following stanza actually need to allow for the case\n> of a deleted tuple as well as an updated one? If so, blindly\n> trying to follow the ctid link is not so cool.\n\nWhich ctid link following are you referring to? I'm not sure I quite\nfollow.\n\n\n> I think the situation in heap_update is just the same, though\n> it's unclear why the code is slightly different. The extra\n> Assert in that crosscheck stanza certainly appears 100% redundant\n> with the later Assert, though both are wrong per this analysis.\n\nI assume Haribabu, Alexander (who I unfortunately forgot to credit in\nthe commit message) or I added it because it's useful to differentiate\nwhether the assert is raised because of the crosscheck, or the \"general\"\ncode paths. I assume it was added because calling code did check for\nctid difference to understand the difference between HeapTupleUpdated\nimplying a delete or an update. 
But for the crosscheck case (most of?)\nthose wouldn't ever reach the ctid equivalency check, because of\n+ if (IsolationUsesXactSnapshot())\n+ ereport(ERROR,\n+ (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),\n+ errmsg(\"could not serialize access due to concurrent update\")));\nlike checks beforehand.\n\n\n> In a non-assert build, this would just boil down to what TID\n> is returned along with the failure result code, so it might\n> be that there's no obvious bug, depending on what callers are\n> doing with that TID. That would perhaps explain the lack of\n> associated field trouble reports. These issues are pretty old\n> so you'd think we'd have heard about it if there were live bugs.\n\nI think the if (IsolationUsesXactSnapshot()) ereport(ERROR, ...)\ntype checks hould prevent the ctid from being relevant - unless we\nsomehow end up with a crosscheck snapshot without\nIsolationUsesXactSnapshot() being true.\n\n\n> In any case, seems like we're missing some isolation test\n> coverage here ...\n\nIndeed. I added a good number of additional EPQ tests, but they're\neither largely or exclusively for READ COMMITTED...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 22 Mar 2020 14:18:28 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Assert() failures during RI checks" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-03-22 15:00:51 -0400, Tom Lane wrote:\n>> Another thing that is very peculiar in this area is that the initial\n>> assertion in the second stanza allows the case of result == TM_Deleted.\n\n> In this case, isn't it clearly required to accept TM_Deleted? The HTSU\n> after l1: obviously can return that, no?\n\nWell, that's the question. 
If it is required, then we need to fix the\ncode in that stanza to not be trying to chase a bogus next-tid link.\n\n> I wonder if we shouldn't just change the crosscheck case to set\n> something other than TM_Updated, as it's not really accurate to say the\n> tuple was updated.\n\nYeah, I was wondering about giving that a new result code, too.\nIt would be a little bit invasive and not at all back-patchable,\nbut (say) TM_SerializationViolation seems like a cleaner output ---\nand we could define it to not return any TID info, as TM_Delete\ndoesn't. But we also need a fix that *can* be back-patched.\n\n>> 2. Does the following stanza actually need to allow for the case\n>> of a deleted tuple as well as an updated one? If so, blindly\n>> trying to follow the ctid link is not so cool.\n\n> Which ctid link following are you referring to? I'm not sure I quite\n> follow.\n\nThe stanza right after the crosscheck one has always assumed that\nit is dealing with an outdated-by-update tuple, so that chasing up\nto the next TID is a sane thing to do. Maybe that's been wrong\nsince day one, or maybe we made it wrong somewhere along the line,\nbut it seems clearly wrong now (even without accounting for the\nserialization crosscheck case). I think it just accidentally\ndoesn't cause problems in non-assert builds, as long as nobody\nis assuming too much about which TID or XID gets returned.\n\n> But for the crosscheck case (most of?)\n> those wouldn't ever reach the ctid equivalency check, because of\n> + if (IsolationUsesXactSnapshot())\n> + ereport(ERROR,\n> + (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),\n> + errmsg(\"could not serialize access due to concurrent update\")));\n> like checks beforehand.\n\nYeah. 
For HEAD I'm imagining that callers would just throw\nERRCODE_T_R_SERIALIZATION_FAILURE unconditionally if they\nget back TM_SerializationViolation.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 22 Mar 2020 18:30:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Assert() failures during RI checks" }, { "msg_contents": "Hi,\n\nOn 2020-03-22 18:30:04 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I wonder if we shouldn't just change the crosscheck case to set\n> > something other than TM_Updated, as it's not really accurate to say the\n> > tuple was updated.\n>\n> Yeah, I was wondering about giving that a new result code, too.\n> It would be a little bit invasive and not at all back-patchable,\n> but (say) TM_SerializationViolation seems like a cleaner output ---\n> and we could define it to not return any TID info, as TM_Delete\n> doesn't. But we also need a fix that *can* be back-patched.\n\nI wonder if returning TM_Invisible in the backbranches would be the\nright answer. \"The affected tuple wasn't visible to the relevant\nsnapshot\" is not the worst description for a crosscheck snapshot\nviolation.\n\nWe would have to start issuing a better error message for it, as it's\nunexpected in most places right now. It'd be kind of odd for\nheap_delete/update() to error out with \"attempted to delete invisible\ntuple\" but still return it in some cases, though.\n\n\n> >> 2. Does the following stanza actually need to allow for the case\n> >> of a deleted tuple as well as an updated one? If so, blindly\n> >> trying to follow the ctid link is not so cool.\n>\n> > Which ctid link following are you referring to? 
I'm not sure I quite\n> > follow.\n>\n> The stanza right after the crosscheck one has always assumed that\n> it is dealing with an outdated-by-update tuple, so that chasing up\n> to the next TID is a sane thing to do.\n\nDo you just mean the\n\t\ttmfd->ctid = tp.t_data->t_ctid;\nand\n\t\ttmfd->xmax = HeapTupleHeaderGetUpdateXid(tp.t_data);\n? I would not have described that as chasing, which would explain my\ndifficulty following.\n\n\n> Maybe that's been wrong\n> since day one, or maybe we made it wrong somewhere along the line,\n> but it seems clearly wrong now (even without accounting for the\n> serialization crosscheck case). I think it just accidentally\n> doesn't cause problems in non-assert builds, as long as nobody\n> is assuming too much about which TID or XID gets returned.\n\nIt'd probably be worthwhile to rejigger those branches to something\nroughly like:\n\n\tif (result != TM_Ok)\n\t{\n\t\tswitch (result)\n\t\t{\n\t\t\tcase TM_SelfModified:\n\t\t\t\tAssert(!ItemPointerEquals(&tp.t_self, &tp.t_data->t_ctid));\n\t\t\t\ttmfd->ctid = tp.t_data->t_ctid;\n\t\t\t\ttmfd->xmax = HeapTupleHeaderGetUpdateXid(tp.t_data);\n\t\t\t\ttmfd->cmax = HeapTupleHeaderGetCmax(tp.t_data);\n\t\t\t\tbreak;\n\t\t\tcase TM_Updated:\n\t\t\t\tAssert(!ItemPointerEquals(&tp.t_self, &tp.t_data->t_ctid));\n\t\t\t\ttmfd->ctid = tp.t_data->t_ctid;\n\t\t\t\ttmfd->xmax = HeapTupleHeaderGetUpdateXid(tp.t_data);\n\t\t\t\ttmfd->cmax = InvalidCommandId;\n\t\t\t\tbreak;\n\t\t\tcase TM_Deleted:\n\t\t\t\ttmfd->ctid = tp.t_self;\n\t\t\t\ttmfd->xmax = HeapTupleHeaderGetUpdateXid(tp.t_data);\n\t\t\t\ttmfd->cmax = InvalidCommandId;\n\t\t\t\tbreak;\n\t\t\tcase TM_BeingModified:\n\t\t\t\tAssert(!ItemPointerEquals(&tp.t_self, &tp.t_data->t_ctid));\n\t\t\t\ttmfd->ctid = tp.t_data->t_ctid;\n\t\t\t\ttmfd->xmax = HeapTupleHeaderGetUpdateXid(tp.t_data);\n\t\t\t\ttmfd->cmax = InvalidCommandId;\n\t\t\t\tbreak;\n\t\t\tdefault:\n\t\t\t\telog(ERROR, \"unexpected visibility %d\",
result);\n\t\t}\n\n\t\tUnlockReleaseBuffer(buffer);\n\t\tif (have_tuple_lock)\n\t\t\tUnlockTupleTuplock(relation, &(tp.t_self), LockTupleExclusive);\n\t\tif (vmbuffer != InvalidBuffer)\n\t\t\tReleaseBuffer(vmbuffer);\n\t\treturn result;\n\t}\n\n\n> > But for the crosscheck case (most of?)\n> > those wouldn't ever reach the ctid equivalency check, because of\n> > + if (IsolationUsesXactSnapshot())\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_T_R_SERIALIZATION_FAILURE),\n> > + errmsg(\"could not serialize access due to concurrent update\")));\n> > like checks beforehand.\n>\n> Yeah. For HEAD I'm imagining that callers would just throw\n> ERRCODE_T_R_SERIALIZATION_FAILURE unconditionally if they\n> get back TM_SerializationViolation.\n\nI am wondering if the correct HEAD answer would be to somehow lift the\ncrosscheck logic out of heapam.c. ISTM that heapam.c is kind of too low\nlevel for that type of concern. But it seems hard to do that without\nperforming unnecessary updates that immediately are thrown away by\nthrowing an error at a higher level.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 23 Mar 2020 10:24:55 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Assert() failures during RI checks" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-03-22 18:30:04 -0400, Tom Lane wrote:\n>> Yeah, I was wondering about giving that a new result code, too.\n>> It would be a little bit invasive and not at all back-patchable,\n>> but (say) TM_SerializationViolation seems like a cleaner output ---\n>> and we could define it to not return any TID info, as TM_Delete\n>> doesn't. But we also need a fix that *can* be back-patched.\n\n> I wonder if returning TM_Invisible in the backbranches would be the\n> right answer. \"The affected tuple wasn't visible to the relevant\n> snapshot\" is not the worst description for a crosscheck snapshot\n> violation.\n\nI'm not for that. 
Abusing an existing result code is what got us\ninto this mess in the first place; abusing a different result code\nthan what we did in prior minor releases can only make things worse\nnot better. (Of course, if there's no external callers of these\nfunctions then we could get away with changing it ... but I'm not\nprepared to assume that, are you?)\n\nAlso it'd mean something quite different in the direct output of\nHeapTupleSatisfiesUpdate than what it would mean one level up,\nwhich is certain to cause confusion.\n\nI think the right thing in the back branches is to keep on returning\nthe \"Updated\" result code, but adjust the crosscheck code path so that\nthe TID that's sent back is always the tuple's own TID.\n\n>> The stanza right after the crosscheck one has always assumed that\n>> it is dealing with an outdated-by-update tuple, so that chasing up\n>> to the next TID is a sane thing to do.\n\n> Do you just mean the\n> \t\ttmfd->ctid = tp.t_data->t_ctid;\n> and\n> \t\ttmfd->xmax = HeapTupleHeaderGetUpdateXid(tp.t_data);\n> ? I would not have described that as chasing, which would explain my\n> difficulty following.\n\nThe bit about returning t_ctid rather than the tuple's own TID seems\nlike chasing a next-tid link to me. The Assert about xmax being valid\nis certainly intended to verify that that's what is happening. I can't\nspeak to what the code thinks it's returning in tmfd->xmax, because\nthat functionality wasn't there the last time I studied this.\n\n> I am wondering if the correct HEAD answer would be to somehow lift the\n> crosscheck logic out of heapam.c. ISTM that heapam.c is kind of too low\n> level for that type of concern. 
But it seems hard to do that without\n> performing unnecessary updates that immediately are thrown away by\n> throwing an error at a higher level.\n\nYeah, I'd be the first to agree that the crosscheck stuff is messy.\nBut it's hard to see how to do better.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 23 Mar 2020 13:54:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Assert() failures during RI checks" }, { "msg_contents": "Hi,\n\nOn 2020-03-23 13:54:31 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2020-03-22 18:30:04 -0400, Tom Lane wrote:\n> >> Yeah, I was wondering about giving that a new result code, too.\n> >> It would be a little bit invasive and not at all back-patchable,\n> >> but (say) TM_SerializationViolation seems like a cleaner output ---\n> >> and we could define it to not return any TID info, as TM_Delete\n> >> doesn't. But we also need a fix that *can* be back-patched.\n> \n> > I wonder if returning TM_Invisible in the backbranches would be the\n> > right answer. \"The affected tuple wasn't visible to the relevant\n> > snapshot\" is not the worst description for a crosscheck snapshot\n> > violation.\n> \n> I'm not for that. Abusing an existing result code is what got us\n> into this mess in the first place; abusing a different result code\n> than what we did in prior minor releases can only make things worse\n> not better.\n\nWell, I think TM_Invisible is much less error prone - callers wouldn't\ntry to chase ctid chains or such. There's really not much they can do\nexcept to error out. It also seems semantically be a better match than\nTM_Updated.\n\n\n> Of course, if there's no external callers of these\n> functions then we could get away with changing it ... 
but I'm not\n> prepared to assume that, are you?)\n\nI'm certain we can't assume that.\n\n\n> I think the right thing in the back branches is to keep on returning\n> the \"Updated\" result code, but adjust the crosscheck code path so that\n> the TID that's sent back is always the tuple's own TID.\n\nIf we do that, and set xmax to InvalidTransactionId, I think we'd likely\nprevent any \"wrong\" chaining from outside heapam, since that already\nneeded to check xmax to be correct.\n\n\n> >> The stanza right after the crosscheck one has always assumed that\n> >> it is dealing with an outdated-by-update tuple, so that chasing up\n> >> to the next TID is a sane thing to do.> \n> > Do you just mean the\n> > \t\ttmfd->ctid = tp.t_data->t_ctid;\n> > and\n> > \t\ttmfd->xmax = HeapTupleHeaderGetUpdateXid(tp.t_data);\n> > ? I would not have described that as chasing, which would explain my\n> > difficulty following.\n> \n> The bit about returning t_ctid rather than the tuple's own TID seems\n> like chasing a next-tid link to me. The Assert about xmax being valid\n> is certainly intended to verify that that's what is happening. I can't\n> speak to what the code thinks it's returning in tmfd->xmax, because\n> that functionality wasn't there the last time I studied this.\n\nWe could probably get rid of tmfd->xmax. It kind of was introduced as\npart of\n\ncommit 6868ed7491b7ea7f0af6133bb66566a2f5fe5a75\nAuthor: Kevin Grittner <kgrittn@postgresql.org>\nDate: 2012-10-26 14:55:36 -0500\n\n Throw error if expiring tuple is again updated or deleted.\n\nIt did previously exist as an explicit heap_delete/heap_update\nparameter, however - and has for a long time:\n\ncommit f57e3f4cf36f3fdd89cae8d566479ad747809b2f\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: 2005-08-20 00:40:32 +0000\n\n Repair problems with VACUUM destroying t_ctid chains too soon, and with\n insufficient paranoia in code that follows t_ctid links. 
(We must do both\n\n\nSince Kevin's commit the relevant logic has since been pushed down into\nthe AM layer as part of tableam. While still used in one place\ninternally inside heap AM (basically where the check from Kevin's commit\ncommit has been moved), but that looks trivial to solve differently.\n\n\n> > I am wondering if the correct HEAD answer would be to somehow lift the\n> > crosscheck logic out of heapam.c. ISTM that heapam.c is kind of too low\n> > level for that type of concern. But it seems hard to do that without\n> > performing unnecessary updates that immediately are thrown away by\n> > throwing an error at a higher level.\n> \n> Yeah, I'd be the first to agree that the crosscheck stuff is messy.\n> But it's hard to see how to do better.\n\nObviously not a short-term solution at all: I wonder if some of this\nstems from implementing RI triggers partially in SQL, even though we\nrequire semantics that aren't efficiently doable in SQL. With the\nconsequences of having RI specific code in various parts of the executor\nand AMs, and of needing a lot more memory etc.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 23 Mar 2020 12:34:06 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Assert() failures during RI checks" } ]
[ { "msg_contents": "Hi PostgreSQL team,\nI am looking forward to participating in the GSoC with PostgreSQL this\nsummer. Below is my draft proposal for your review. Any feedback would be\ngreatly appreciated.\n\n*PL/Java online documentation improvements*\n\n\n\n*Project goal:*\n\nThe goal of the project is to improve the PL/JAVA online documentation\nwebsite in terms of appearance, usability, information showcase and\nfunctionality.\n\n\n\n*Project deliverables:*\n\n1) A PL/JAVA documentation website having four major categories ( Usage,\nrelease information, modules and project documentation).\n\n2) Auto generated documentation pages for the above-mentioned sections.\n\n3) A page showing documentation tree for more than one release of\nPL/JAVA.\n\n4) A custom package which uses maven-site-plugin to generate new\nwebpages whenever required.\n\n5) A style (CSS) framework class that has all the PostgreSQL style\nconfigurations ready to use.\n\n6) An additional template for about us page or any additional\ninformation required based on velocity template.\n\n\n\n*Project implementation plan:*\n\nThe project can be divided into three phases as follows:\n\n1) *CSS framework class and Velocity templates:*\n\nAnalyze the existing PostgreSQL documentation website and identify all the\nstyle including typography, color, theme, headers and footers. Create a CSS\nclass file having the color scheme, footer and header styles, font styles\nand text sizes. Use existing velocity engine code to accommodate new\nrelease documentation.\n\n\n\n2) *Maven-site-plugin package with custom skins:*\n\nSince the PL/Java documentation should look like the original PostgreSQL\ndocumentation, create a template file and a new skin using the\nmaven-site-plugin from apache. Document the template file configurations\nand skin styles. Create a multi-module site menu (\nhttps://maven.apache.org/plugins/maven-site-plugin/examples/multimodule.html)\nthat would accommodate multiple releases of PL/Java. 
The sub-files will\ninherit the site URL from parent POM. The new pages containing the newer\nrelease information would inherit data from JavaDoc using Velocity.\n\n\n\n3) *Documentation and Testing:*\n\nDocument the new documentation tree menu developed using maven-site-plugin\nand the skin. Create documentation for the CSS style class and velocity\nplugin changes.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n*Timeline:*\n\nThis week-by-week timeline provides a rough guideline of how the project\nwill be done.\n\n\n\nMay 4 – May 16\n\nFamiliarize with website and velocity. Start with simple CSS framework\nclass to clone the original PostgreSQL documentation website styles and\nthemes.\n\n\n\nMay 17 – May 30\n\nStart writing the CSS framework class and a new routine to import new\nrelease data using maven-site-plugin and velocity. Maintain a configuration\nfile to track the progress.\n\n\n\n\n\nMay 31 – June 5\n\nMake minor changes to the existing Pl/JAVA documentation website template\nto visualize the improvements. Start implementing multi-module menu changes\nin the website. Add new skin using maven-site-plugin skins (which will use\nstyles from the CSS framework class).\n\n\n\nJune 10 – June 25\n\nCreate a dummy data file to test the documentation trees multi-module menu.\nThis should include at least two dummy release data to test the page\naesthetics and user experience. Conduct feedback survey to analyze the new\nlook of the website and the user experience.\n\n\n\nJuly 1 – July 10\n\nImprove minor typography issues and color schemes across the website and\nits pages. Document the test routines and the website code base. Also\ndocument the skin and style configuration used.\n\n\n\n\n\n\n\n*About me :*\n\nI am a computer science graduate student at Arizona State University\npursuing projects in Node.JS, MySQL and react-native. 
I am part of the\nGoogle developer’s club here where I volunteer to contribute on several of\ntheir projects including some applications that help students find the\nright professor using HTML, CSS, Velocity and GitHub pages. I have a\ncombined experience of 5 years developing and deploying websites using\nseveral frameworks including Vue.JS, Reast.JS and site generators. I have\nused PostgreSQL in some of my projects that involve analysis and querying\nof geo-spatial data and hence I would like to contribute to this project.\n\n\n\n\n\n\n\n[image: Mailtrack]\n<https://mailtrack.io?utm_source=gmail&utm_medium=signature&utm_campaign=signaturevirality5&>\nSender\nnotified by\nMailtrack\n<https://mailtrack.io?utm_source=gmail&utm_medium=signature&utm_campaign=signaturevirality5&>\n03/17/20,\n10:11:21 PM", "msg_date": "Tue, 17 Mar 2020 22:11:31 -0700", "msg_from": "\"p.b uday\" <uday.pb26@gmail.com>", "msg_from_op": true, "msg_subject": "GSoC applicant proposal, Uday PB" }, { "msg_contents": "Hi,\n\nOn 3/18/20 1:11 AM, p.b uday wrote:\n> summer. Below is my draft proposal for your review.
Any feedback would be\n> greatly appreciated.\n> ...\n> The goal of the project is to improve the PL/JAVA online documentation\n> website in terms of appearance, usability, information showcase and\n> functionality.\n\nThanks for your interest! There is plenty of room for the existing\ndocumentation to be improved.\n\n> May 4 – May 16\n> \n> Familiarize with website and velocity. Start with simple CSS framework\n> class to clone the original PostgreSQL documentation website styles and\n> themes.\n\nI should perhaps clarify that I don't think it's essential for the\nPL/Java docs to exactly clone the styles of PostgreSQL or of another\nproject. I would be happy to achieve something more like a family\nresemblance, as you see, for example, between postgresql.org and\njdbc.postgresql.org. There are many differences, and the shades of\nblue aren't the same, but you could believe they are related projects.\nPL/Java's looks very different, only because it's currently using\nthe default out-of-the-box Maven styles with almost zero adjustments.\n\nThe PL/Java doc sources are less like PostgreSQL's (which are\nDocBook XML) and more like PgJDBC's (both use Markdown, but with\ndifferent toolchains). I think changing the markup language would\nbe out of scope; Markdown seems adequate, and personally I find it\nless tiring to write. It would be nice to get it to generate tables\nof contents based on existing headings.\n\nFurther discussion should probably migrate to the pljava-dev list\nbut that seems to be wedged at the moment; I'll look into that\nand report back. 
Meanwhile let's be sparing in use of pgsql-hackers\n(the final commitfest for PostgreSQL 13 is going on this month).\n\nRegards,\n-Chap\n\n\n", "msg_date": "Thu, 19 Mar 2020 10:19:51 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: GSoC applicant proposal, Uday PB" }, { "msg_contents": "Hi!\n\nOn Wed, Mar 18, 2020 at 8:13 AM p.b uday <uday.pb26@gmail.com> wrote:\n> Hi PostgreSQL team,\n> I am looking forward to participating in the GSoC with PostgreSQL this summer. Below is my draft proposal for your review. Any feedback would be greatly appreciated.\n>\n> PL/Java online documentation improvements\n\nDoes your project imply any coding? AFAIR, GSoC doesn't allow pure\ndocumentation projects.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Thu, 19 Mar 2020 21:03:58 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: GSoC applicant proposal, Uday PB" }, { "msg_contents": "Greetings,\n\n* Alexander Korotkov (a.korotkov@postgrespro.ru) wrote:\n> On Wed, Mar 18, 2020 at 8:13 AM p.b uday <uday.pb26@gmail.com> wrote:\n> > Hi PostgreSQL team,\n> > I am looking forward to participating in the GSoC with PostgreSQL this summer. Below is my draft proposal for your review. Any feedback would be greatly appreciated.\n> >\n> > PL/Java online documentation improvements\n> \n> Does your project imply any coding? 
AFAIR, GSoC doesn't allow pure\n> documentation projects.\n\nYeah, I was just sending an email to Chapman regarding that.\n\nThere is a \"Google Season of Docs\" that happened last year in the fall\nthat this would be more appropriate for.\n\nThanks,\n\nStephen", "msg_date": "Thu, 19 Mar 2020 14:08:55 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: GSoC applicant proposal, Uday PB" }, { "msg_contents": "On 3/19/20 2:03 PM, Alexander Korotkov wrote:\n\n> Does your project imply any coding? AFAIR, GSoC doesn't allow pure\n> documentation projects.\n\nThat's a good question. The idea as I proposed it is more of an\ninfrastructure project, adjusting the toolchain that currently\nautogenerates the docs (along with some stylesheets/templates) so\nthat a more usable web reference is generated from the existing\ndocumentation—and to make it capable of generating per-version\nsubtrees, as the PostgreSQL manual does, rather than having the\nmost recent release be the only online reference available.\n\nI was not envisioning it as a technical-writing project to improve\nthe content of the documentation. That surely wouldn't hurt, but\nisn't what I had in mind here.\n\nI am open to withdrawing it and reposting as a Google Season of Docs\nproject if that's what the community prefers, only in that case\nI wonder if it would end up attracting contributors who would be\nexpecting to do some writing and copy-editing, and end up intimidated\nby the coding/infrastructure work required.\n\nSo I'm not certain how it should be categorized, or whether GSoC\nrules should preclude it. 
Judgment call?\n\nRegards,\n-Chap\n\n\n", "msg_date": "Thu, 19 Mar 2020 15:20:59 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: GSoC applicant proposal, Uday PB" }, { "msg_contents": "Greetings,\n\n* Chapman Flack (chap@anastigmatix.net) wrote:\n> On 3/19/20 2:03 PM, Alexander Korotkov wrote:\n> > Does your project imply any coding? AFAIR, GSoC doesn't allow pure\n> > documentation projects.\n> \n> That's a good question. The idea as I proposed it is more of an\n> infrastructure project, adjusting the toolchain that currently\n> autogenerates the docs (along with some stylesheets/templates) so\n> that a more usable web reference is generated from the existing\n> documentation—and to make it capable of generating per-version\n> subtrees, as the PostgreSQL manual does, rather than having the\n> most recent release be the only online reference available.\n> \n> I was not envisioning it as a technical-writing project to improve\n> the content of the documentation. That surely wouldn't hurt, but\n> isn't what I had in mind here.\n> \n> I am open to withdrawing it and reposting as a Google Season of Docs\n> project if that's what the community prefers, only in that case\n> I wonder if it would end up attracting contributors who would be\n> expecting to do some writing and copy-editing, and end up intimidated\n> by the coding/infrastructure work required.\n\nI appreciate that it might not be a great fit for GSoD either, but that\ndoesn't mean it fits as a GSoC project.\n\n> So I'm not certain how it should be categorized, or whether GSoC\n> rules should preclude it. Judgment call?\n\nYou could ask on the GSoC mentors list, but I feel pretty confident that\nthis doesn't meet the criteria to be a GSoC project, unfortunately. 
The\nGSoC folks are pretty clear that GSoC is for writing OSS code, not for\npulling together tools with shell scripts, or for writing documentation\nor for systems administration or for other things, even if those other\nthings are things that an OSS project needs.\n\nThanks,\n\nStephen", "msg_date": "Thu, 19 Mar 2020 15:31:18 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: GSoC applicant proposal, Uday PB" }, { "msg_contents": "On 03/19/20 15:31, Stephen Frost wrote:\n>> So I'm not certain how it should be categorized, or whether GSoC\n>> rules should preclude it. Judgment call?\n>\n> You could ask on the GSoC mentors list, but I feel pretty confident that\n> this doesn't meet the criteria to be a GSoC project, unfortunately.\n\nIt did produce a useful thread on the mentors list, with views like\n\"while GSOC project doesn't allow project about *WRITING* documentation,\nI don't see any reason why coding some solution which improves your\nrelease cycles for eternity wouldn't qualify as a valid GSOC project\" and\n\"if it's just writing a few scripts that will take a week and the remainder\nis documenting that work, I'm fairly certain that doesn't count as a valid\nGSoC. Now if it's code-heavy -- then definitely that counts as a valid\nGSoC project.\"\n\nAt the same time, an off-list response added \"However if the org admins\ndon't like the project, or prefer other projects instead of it, the\nproject has low chances of being selected\" and, given that there doesn't\nseem to be clear agreement, and the organization surely will have more\nworthy projects than slots will be available for, it is probably best\nthat I withdraw this one for now.\n\np.b uday, thank you for your interest and the time spent on your\nproposal; I'm sorry to be withdrawing this project. 
I hope you might\ncontinue to have interest in PostgreSQL generally or PL/Java specifically\nin the future.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Sat, 21 Mar 2020 10:30:46 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: GSoC applicant proposal, Uday PB" }, { "msg_contents": "> -------Original Message-------\n> From: Chapman Flack <chap@anastigmatix.net>\n> \n> p.b uday, thank you for your interest and the time spent on your\n> proposal; I'm sorry to be withdrawing this project. I hope you might\n> continue to have interest in PostgreSQL generally or PL/Java specifically\n> in the future.\n> \n> Regards,\n> -Chap\n> \n\nUday how about starting work on your proposal and claimed interest in PostgreSQL without worrying about the GSoC money ?\n\n\n", "msg_date": "Sat, 21 Mar 2020 20:06:41 +0530", "msg_from": "\"inout\" <inout@users.sourceforge.net>", "msg_from_op": false, "msg_subject": "Re: GSoC applicant proposal, Uday PB" }, { "msg_contents": "On 03/21/20 10:36, inout wrote:\n\n> Uday how about starting work on your proposal and claimed interest in\n> PostgreSQL without worrying about the GSoC money ?\n\nNaturally I'd be pleased with that outcome, but it's a purely personal\nand situational decision that I wouldn't presume to press.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Sat, 21 Mar 2020 10:54:07 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: GSoC applicant proposal, Uday PB" } ]
[ { "msg_contents": "Hello,\n\nI've been trying to play with support functions for a use-case of ours, and\nfound some behaviors I don't know are expected or not.\n\nThe use case: we have some complicated queries, where whole-branches of the\nexecution tree could be cut if some things were evaluated at planning time.\nTake the following simplified example:\n\nCREATE OR REPLACE FUNCTION myfunction(t1_id int) AS $$\nSELECT *\nFROM sometable t1\nJOIN sometable t2 on t2.t1_id = t1.id\nWHERE id = t1_id AND t1.somecolumn IS NOT NULL\n$$ language SQL;\n\nIf I were to incorporate this function in a larger query, the planner will\nchoose a plan based on a generic value of t1_id and may estimate a large\nrowcount after inlining.\n\nWhat I want to do is to evaluate whether id = t1_id AND somecolumn is NOT\nNULL at planification time, and replace the function by another one if this\ncan be pruned altogether.\n\nSo, what I've been doing is to implement a support function for\nSupportRequestSimplify, and If the predicate doesn't match any row, replace\nthe FuncExpr by a new one, calling a different function.\n\nThis seems to work great, but I have several questions:\n\n1) Is it valid to make SPI calls in a support function to do this kind of\nsimplification ?\n\n2) My new FuncExpr doesn't get inlined. This is because in\ninline_set_returning_function, we check that after the call to\neval_const_expressions we still call the same function. I think it would be\nbetter to first simplify the function if we can, and only then record the\nfunction oid and call the rest of the machinery. I tested that naively by\ncalling eval_const_expressions early in inline_set_returning_function and\nit seems to do the trick. A proper patch would likely only call the support\nfunction at this stage.\n\nWhat do you think ?\n\n-- \n\n\n\n\nThis e-mail message and any attachments to it are intended only for the \nnamed recipients and may contain legally privileged and/or confidential \ninformation. 
If you are not one of the intended recipients, do not \nduplicate or forward this e-mail message.", "msg_date": "Wed, 18 Mar 2020 10:39:10 +0100", "msg_from": "Ronan Dunklau <ronan_dunklau@ultimatesoftware.com>", "msg_from_op": true, "msg_subject": "SupportRequestSimplify and SQL SRF" }, { "msg_contents": "Ronan Dunklau <ronan_dunklau@ultimatesoftware.com> writes:\n> What I want to do is to evaluate whether id = t1_id AND somecolumn is NOT\n> NULL at planification time, and replace the function by another one if this\n> can be pruned altogether.\n\nHm. There was never really any expectation that support functions\nwould be attached to PL functions --- since you have to write the\nformer in C, it seems a little odd for the supported function not\nto also be C. Perhaps more to the point though, what simplification\nknowledge is this support function bringing to bear that the planner\nhasn't already got? It kinda feels like you are trying to solve\nthis in the wrong place.\n\n> So, what I've been doing is to implement a support function for\n> SupportRequestSimplify, and If the predicate doesn't match any row, replace\n> the FuncExpr by a new one, calling a different function.\n\nI'm confused. I don't see any SupportRequestSimplify call at all in the\ncode path for set-returning functions. Maybe there should be one,\nbut there is not.\n\n> This seems to work great, but I have several questions:\n\n> 1) Is it valid to make SPI calls in a support function to do this kind of\n> simplification ?\n\nHmm, a bit scary maybe but we don't hesitate to const-simplify\nfunctions that could contain SPI calls, so I don't see a big\nproblem in that aspect. I'd be more worried, if you're executing\nsome random SQL that way, about whether the SQL reliably does what\nyou want (in the face of variable search_path and the like).\n\n> 2) My new FuncExpr doesn't get inlined.
This is because in\n> inline_set_returning_function, we check that after the call to\n> eval_const_expressions we still call the same function.\n\nUh, what? I didn't check the back branches, but I see nothing\nremotely like that in HEAD.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 18 Mar 2020 09:59:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SupportRequestSimplify and SQL SRF" }, { "msg_contents": "> Hm. There was never really any expectation that support functions\n> would be attached to PL functions --- since you have to write the\n> former in C, it seems a little odd for the supported function not\n> to also be C. Perhaps more to the point though, what simplification\n> knowledge is this support function bringing to bear that the planner\n> hasn't already got? It kinda feels like you are trying to solve\n> this in the wrong place.\n>\n\nSome optimizations aren't done by the planner, and could be added easily\nthat way.\n\nFor example, the following query would have wrong estimates if the planner\ncan't inject inferred values:\n\nSELECT t2.*\nFROM t1 JOIN t2 ON t1.id = t2.t1_id\nWHERE t1.code = ? AND t1.col1 IS NOT NULL\nUNION\nSELECT t3.*\nFROM t1 JOIN t3 ON t1.id = t3.t1_id\nWHERE t1.code = ? AND t1.col1 IS NULL\n\nAt any given time, only one of those branches will be evaluated. I can either\nwrite a PL function which will force me to abandon all the benefits of\ninlining with regards to cost estimation, or keep it in SQL and fall back\non a generic plan, which will evaluate an average number of rows for both\ncases.\nWith support functions, I was hoping to replace a function containing the\nabove query to another depending of the matched t1 record: if col1 is NULL,\nthen query directly t2 else query directly t3.
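Concretely, a sketch of what the two replacement targets could look like (all names here are made up for illustration, and this assumes t2 and t3 have union-compatible row types, as in the UNION above):

```sql
-- Hypothetical thin wrappers, one per branch of the UNION, so that
-- whichever one the support function substitutes can still be inlined.
CREATE FUNCTION fetch_via_t2(p_t1_id int) RETURNS SETOF t2 AS $$
    SELECT t2.* FROM t2 WHERE t2.t1_id = p_t1_id
$$ LANGUAGE sql STABLE;

CREATE FUNCTION fetch_via_t3(p_t1_id int) RETURNS SETOF t3 AS $$
    SELECT t3.* FROM t3 WHERE t3.t1_id = p_t1_id
$$ LANGUAGE sql STABLE;

-- At plan time, the support function would look up the single t1 row
-- matching code (unique constraint) and rewrite the original FuncExpr
-- into a call of fetch_via_t2 or fetch_via_t3 depending on t1.col1.
```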
By injecting the value\ndirectly when we know we have only one row (unique constraint on t1.code)\nwe can optimize the whole thing away, and have sensible estimates based on\nthe statistics of t1_id. But of course, I need to be able to use SPI calls\nto inject the value...\n\nI'm not yet convinced it is a good idea either, but it is one I wanted to\nexperiment with.\nIn the more generic case, the planner could possibly perform those kind of\noptimizations if it was able to identify JOINs between one unique row and\nother relations. If we were to work on a patch like this, would it be\nsomething that could be of interest, perhaps hidden behind a GUC ?\n\n\n> I'm confused. I don't see any SupportRequestSimplify call at all in the\n> code path for set-returning functions. Maybe there should be one,\n> but there is not.\n>\n\nSorry, I should have checked on HEAD, I was working on REL_12_STABLE.\nThis simplification was done in eval_const_expressions, which in turn ended\nin calling simplify_function.\nI have not looked at the code thoroughly on HEAD, but a quick test shows\nthat it now does what I want and presumably simplifies it earlier.\n\n\n> > 1) Is it valid to make SPI calls in a support function to do this kind of\n> > simplification ?\n>\n> Hmm, a bit scary maybe but we don't hesitate to const-simplify\n> functions that could contain SPI calls, so I don't see a big\n> problem in that aspect. I'd be more worried, if you're executing\n> some random SQL that way, about whether the SQL reliably does what\n> you want (in the face of variable search_path and the like).\n>\n\nOk, I need to triple-check that, but that was my main worry.\n\n\n>\n> > 2) My new FuncExpr doesn't get inlined. This is because in\n> > inline_set_returning_function, we check that after the call to\n> > eval_const_expressions we still call the same function.\n>\n> Uh, what? 
I didn't check the back branches, but I see nothing\n> remotely like that in HEAD.\n>\n\nSorry again, I should have checked HEAD. The code is different on HEAD, and\nworks as expected: the replacement SRF ends up being inlined.\nAgain, thank you for your answer.\n\nBest regards,\n\n-- \n\n\n\n\nThis e-mail message and any attachments to it are intended only for the \nnamed recipients and may contain legally privileged and/or confidential \ninformation. If you are not one of the intended recipients, do not \nduplicate or forward this e-mail message.", "msg_date": "Wed, 18 Mar 2020 15:29:38 +0100", "msg_from": "Ronan Dunklau <ronan_dunklau@ultimatesoftware.com>", "msg_from_op": true, "msg_subject": "Re: SupportRequestSimplify and SQL SRF" } ]
[ { "msg_contents": "Hi,\n\nWhile looking at RIC for the collation versioning issue Michael raised earlier,\nI found a thinko in a nearby comment, apparently introduced in the original\nREINDEX CONCURRENTLY patch (5dc92b8). Trivial patch attached.", "msg_date": "Wed, 18 Mar 2020 15:33:40 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Thinko in index_concurrently_swap comment" }, { "msg_contents": "On Wed, Mar 18, 2020 at 03:33:40PM +0100, Julien Rouhaud wrote:\n> While looking at RIC for the collation versioning issue Michael raised earlier,\n> I found a thinko in a nearby comment, apparently introduced in the original\n> REINDEX CONCURRENTLY patch (5dc92b8). Trivial patch attached.\n\nThanks, Julien. Fixed as of d41202f.\n--\nMichael", "msg_date": "Thu, 19 Mar 2020 09:54:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Thinko in index_concurrently_swap comment" } ]
[ { "msg_contents": "When SaveSlotToPath() is called with elevel=LOG, the early exits don't \nrelease the slot's io_in_progress_lock. Fix attached.\n\nThis could result in a walsender being stuck on the lock forever. A \npossible way to get into this situation is if the offending code paths \nare triggered in a low disk space situation. (This is how it was found; \nmaybe there are other ways.)\n\nPavan Deolasee and Craig Ringer worked on this issue. I'm forwarding it \non their behalf.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 18 Mar 2020 16:46:23 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "potential stuck lock in SaveSlotToPath()" }, { "msg_contents": "Hi,\n\nOn 2020-03-18 16:46:23 +0100, Peter Eisentraut wrote:\n> When SaveSlotToPath() is called with elevel=LOG, the early exits don't\n> release the slot's io_in_progress_lock. Fix attached.\n\nI'm a bit confused as to why we we ever call it with elevel = LOG\n(i.e. why we have the elevel parameter at all). That seems to have been\nthere from the start, so it's either me or Robert that's to blame. But I\ncan't immediately see a reason for it?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 18 Mar 2020 11:45:42 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: potential stuck lock in SaveSlotToPath()" }, { "msg_contents": "On 2020-Mar-18, Andres Freund wrote:\n\n> Hi,\n> \n> On 2020-03-18 16:46:23 +0100, Peter Eisentraut wrote:\n> > When SaveSlotToPath() is called with elevel=LOG, the early exits don't\n> > release the slot's io_in_progress_lock. Fix attached.\n> \n> I'm a bit confused as to why we we ever call it with elevel = LOG\n> (i.e. why we have the elevel parameter at all). That seems to have been\n> there from the start, so it's either me or Robert that's to blame. 
But I\n> can't immediately see a reason for it?\n\nI guess you didn't want failure to save a slot to be a reason to abort a\ncheckpoint.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 18 Mar 2020 16:54:19 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: potential stuck lock in SaveSlotToPath()" }, { "msg_contents": "On Thu, Mar 19, 2020 at 4:46 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> [patch]\n\n+ * releaseing even in that case.\n\nTypo.\n\n\n", "msg_date": "Thu, 19 Mar 2020 09:13:28 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: potential stuck lock in SaveSlotToPath()" }, { "msg_contents": "Hi,\n\nOn 2020-03-18 16:54:19 -0300, Alvaro Herrera wrote:\n> On 2020-Mar-18, Andres Freund wrote:\n> > On 2020-03-18 16:46:23 +0100, Peter Eisentraut wrote:\n> > > When SaveSlotToPath() is called with elevel=LOG, the early exits don't\n> > > release the slot's io_in_progress_lock. Fix attached.\n> > \n> > I'm a bit confused as to why we we ever call it with elevel = LOG\n> > (i.e. why we have the elevel parameter at all). That seems to have been\n> > there from the start, so it's either me or Robert that's to blame. But I\n> > can't immediately see a reason for it?\n> \n> I guess you didn't want failure to save a slot to be a reason to abort a\n> checkpoint.\n\nI don't see a valid reason for that though - if anything it's dangerous,\nbecause we're not persistently saving the slot. It should fail the\ncheckpoint imo.
Robert, do you have an idea?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 18 Mar 2020 13:25:30 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: potential stuck lock in SaveSlotToPath()" }, { "msg_contents": "On Wed, Mar 18, 2020 at 4:25 PM Andres Freund <andres@anarazel.de> wrote:\n> I don't see a valid reason for that though - if anything it's dangerous,\n> because we're not persistently saving the slot. It should fail the\n> checkpoint imo. Robert, do you have an idea?\n\nWell, the comment atop SaveSlotToPath says:\n\n * This needn't actually be part of a checkpoint, but it's a convenient\n * location.\n\nAnd I agree with that.\n\nIncidentally, the wait-event handling in SaveSlotToPath() doesn't look\nright for the early-exit cases either.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 19 Mar 2020 11:38:28 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: potential stuck lock in SaveSlotToPath()" }, { "msg_contents": "On 2020-03-19 16:38, Robert Haas wrote:\n> Incidentally, the wait-event handling in SaveSlotToPath() doesn't look\n> right for the early-exit cases either.\n\nThere appear to be appropriate pgstat_report_wait_end() calls. 
What are \nyou seeing?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 20 Mar 2020 16:32:47 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: potential stuck lock in SaveSlotToPath()" }, { "msg_contents": "On Fri, Mar 20, 2020 at 11:32 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> On 2020-03-19 16:38, Robert Haas wrote:\n> > Incidentally, the wait-event handling in SaveSlotToPath() doesn't look\n> > right for the early-exit cases either.\n>\n> There appear to be appropriate pgstat_report_wait_end() calls. What are\n> you seeing?\n\nOh, you're right. I think I got confused because the rename() and\nclose() don't have that, but those don't have a wait event set either.\nSorry for the noise.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 20 Mar 2020 11:38:15 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: potential stuck lock in SaveSlotToPath()" }, { "msg_contents": "On 2020-03-20 16:38, Robert Haas wrote:\n> On Fri, Mar 20, 2020 at 11:32 AM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n>> On 2020-03-19 16:38, Robert Haas wrote:\n>>> Incidentally, the wait-event handling in SaveSlotToPath() doesn't look\n>>> right for the early-exit cases either.\n>>\n>> There appear to be appropriate pgstat_report_wait_end() calls. What are\n>> you seeing?\n> \n> Oh, you're right. 
I think I got confused because the rename() and\n> close() don't have that, but those don't have a wait event set either.\n> Sorry for the noise.\n\nAny concerns about applying and backpatching the patch I posted?\n\nThe talk about reorganizing this code doesn't seem very concrete at the \nmoment and would probably not be backpatch material anyway.\n\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 25 Mar 2020 11:13:19 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: potential stuck lock in SaveSlotToPath()" }, { "msg_contents": "On 2020-Mar-25, Peter Eisentraut wrote:\n\n> On 2020-03-20 16:38, Robert Haas wrote:\n> > On Fri, Mar 20, 2020 at 11:32 AM Peter Eisentraut\n> > <peter.eisentraut@2ndquadrant.com> wrote:\n> > > On 2020-03-19 16:38, Robert Haas wrote:\n> > > > Incidentally, the wait-event handling in SaveSlotToPath() doesn't look\n> > > > right for the early-exit cases either.\n> > > \n> > > There appear to be appropriate pgstat_report_wait_end() calls. What are\n> > > you seeing?\n> > \n> > Oh, you're right. 
I think I got confused because the rename() and\n> > close() don't have that, but those don't have a wait event set either.\n> > Sorry for the noise.\n> \n> Any concerns about applying and backpatching the patch I posted?\n\nIt looks a straight bug fix to me, I agree it should be back-patched.\n\n> The talk about reorganizing this code doesn't seem very concrete at the\n> moment and would probably not be backpatch material anyway.\n\nAgreed on both counts.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 25 Mar 2020 10:41:13 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: potential stuck lock in SaveSlotToPath()" }, { "msg_contents": "On Wed, Mar 25, 2020 at 6:13 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> Any concerns about applying and backpatching the patch I posted?\n\nNot from me.\n\n> The talk about reorganizing this code doesn't seem very concrete at the\n> moment and would probably not be backpatch material anyway.\n\n+1.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 25 Mar 2020 12:56:10 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: potential stuck lock in SaveSlotToPath()" }, { "msg_contents": "On 2020-03-25 17:56, Robert Haas wrote:\n> On Wed, Mar 25, 2020 at 6:13 AM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n>> Any concerns about applying and backpatching the patch I posted?\n> \n> Not from me.\n> \n>> The talk about reorganizing this code doesn't seem very concrete at the\n>> moment and would probably not be backpatch material anyway.\n> \n> +1.\n\ncommitted and backpatched\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 26 
Mar 2020 14:16:05 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: potential stuck lock in SaveSlotToPath()" }, { "msg_contents": "On Thu, Mar 26, 2020 at 02:16:05PM +0100, Peter Eisentraut wrote:\n> committed and backpatched\n\nThe patch committed does that in three places:\n /* rename to permanent file, fsync file and directory */\n if (rename(tmppath, path) != 0)\n {\n+ LWLockRelease(&slot->io_in_progress_lock);\n\tereport(elevel,\n (errcode_for_file_access(),\n errmsg(\"could not rename file \\\"%s\\\" to \\\"%s\\\": %m\",\n\nBut why do you assume that LWLockRelease() never changes errno? It\nseems to me that you should save errno before calling LWLockRelease(),\nand then restore it back before using %m in the log message, no? See\nfor example the case where trace_lwlocks is set.\n--\nMichael", "msg_date": "Fri, 27 Mar 2020 16:48:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: potential stuck lock in SaveSlotToPath()" }, { "msg_contents": "On 2020-03-27 08:48, Michael Paquier wrote:\n> On Thu, Mar 26, 2020 at 02:16:05PM +0100, Peter Eisentraut wrote:\n>> committed and backpatched\n> \n> The patch committed does that in three places:\n> /* rename to permanent file, fsync file and directory */\n> if (rename(tmppath, path) != 0)\n> {\n> + LWLockRelease(&slot->io_in_progress_lock);\n> \tereport(elevel,\n> (errcode_for_file_access(),\n> errmsg(\"could not rename file \\\"%s\\\" to \\\"%s\\\": %m\",\n> \n> But why do you assume that LWLockRelease() never changes errno? It\n> seems to me that you should save errno before calling LWLockRelease(),\n> and then restore it back before using %m in the log message, no? See\n> for example the case where trace_lwlocks is set.\n\nGood catch. 
How about the attached patch?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Wed, 1 Apr 2020 16:26:25 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: potential stuck lock in SaveSlotToPath()" }, { "msg_contents": "On Wed, Apr 01, 2020 at 04:26:25PM +0200, Peter Eisentraut wrote:\n> Good catch. How about the attached patch?\n\nWFM. Another trick would be to call LWLockRelease() after generating\nthe log, but I find your patch more consistent with the surroundings.\n--\nMichael", "msg_date": "Thu, 2 Apr 2020 15:21:50 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: potential stuck lock in SaveSlotToPath()" }, { "msg_contents": "On 2020-04-02 08:21, Michael Paquier wrote:\n> On Wed, Apr 01, 2020 at 04:26:25PM +0200, Peter Eisentraut wrote:\n>> Good catch. How about the attached patch?\n> \n> WFM. Another trick would be to call LWLockRelease() after generating\n> the log, but I find your patch more consistent with the surroundings.\n\ndone\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 5 Apr 2020 10:13:44 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: potential stuck lock in SaveSlotToPath()" } ]
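The fix discussed in the thread above hinges on a classic C pitfall: any call made between a failing syscall and the error report that formats %m can clobber errno. Below is a minimal standalone sketch of the save/restore pattern Michael suggests — this is not PostgreSQL source code; `release_lock()` is a hypothetical stand-in for `LWLockRelease()`, and `strerror()` stands in for ereport's %m handling:

```c
#include <assert.h>
#include <errno.h>
#include <string.h>

/* Stand-in for LWLockRelease(): any helper that may clobber errno
 * internally (e.g. via logging or system calls). */
static void release_lock(void)
{
    errno = 0;                  /* simulate errno being overwritten */
}

/* Mirrors the corrected error path: release the lock before
 * reporting, but save/restore errno so the message still reflects
 * the original failure rather than whatever the release did. */
static const char *report_failure(void)
{
    int save_errno = errno;     /* capture before releasing the lock */

    release_lock();
    errno = save_errno;         /* restore for the error report */
    return strerror(errno);
}
```

With this ordering a caller that hits a failure (say, rename() setting errno to ENOENT) can release the lock first and still report the original error — the combination the thread's final patch needed.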
[ { "msg_contents": "Disk-based Hash Aggregation.\n\nWhile performing hash aggregation, track memory usage when adding new\ngroups to a hash table. If the memory usage exceeds work_mem, enter\n\"spill mode\".\n\nIn spill mode, new groups are not created in the hash table(s), but\nexisting groups continue to be advanced if input tuples match. Tuples\nthat would cause a new group to be created are instead spilled to a\nlogical tape to be processed later.\n\nThe tuples are spilled in a partitioned fashion. When all tuples from\nthe outer plan are processed (either by advancing the group or\nspilling the tuple), finalize and emit the groups from the hash\ntable. Then, create new batches of work from the spilled partitions,\nand select one of the saved batches and process it (possibly spilling\nrecursively).\n\nAuthor: Jeff Davis\nReviewed-by: Tomas Vondra, Adam Lee, Justin Pryzby, Taylor Vesely, Melanie Plageman\nDiscussion: https://postgr.es/m/507ac540ec7c20136364b5272acbcd4574aa76ef.camel@j-davis.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/1f39bce021540fde00990af55b4432c55ef4b3c7\n\nModified Files\n--------------\ndoc/src/sgml/config.sgml | 32 +\nsrc/backend/commands/explain.c | 37 +\nsrc/backend/executor/nodeAgg.c | 1092 ++++++++++++++++++++++++-\nsrc/backend/optimizer/path/costsize.c | 70 +-\nsrc/backend/optimizer/plan/planner.c | 19 +-\nsrc/backend/optimizer/prep/prepunion.c | 2 +-\nsrc/backend/optimizer/util/pathnode.c | 14 +-\nsrc/backend/utils/misc/guc.c | 20 +\nsrc/include/executor/nodeAgg.h | 8 +\nsrc/include/nodes/execnodes.h | 22 +-\nsrc/include/optimizer/cost.h | 4 +-\nsrc/test/regress/expected/aggregates.out | 184 +++++\nsrc/test/regress/expected/groupingsets.out | 122 +++\nsrc/test/regress/expected/select_distinct.out | 62 ++\nsrc/test/regress/expected/sysviews.out | 4 +-\nsrc/test/regress/sql/aggregates.sql | 131 +++\nsrc/test/regress/sql/groupingsets.sql | 103 +++\nsrc/test/regress/sql/select_distinct.sql 
| 62 ++\n18 files changed, 1950 insertions(+), 38 deletions(-)", "msg_date": "Wed, 18 Mar 2020 22:54:49 +0000", "msg_from": "Jeff Davis <jdavis@postgresql.org>", "msg_from_op": true, "msg_subject": "pgsql: Disk-based Hash Aggregation." }, { "msg_contents": "On 2020-Mar-18, Jeff Davis wrote:\n\n> Disk-based Hash Aggregation.\n> \n> While performing hash aggregation, track memory usage when adding new\n> groups to a hash table. If the memory usage exceeds work_mem, enter\n> \"spill mode\".\n\nKudos!!\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 18 Mar 2020 20:24:04 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Disk-based Hash Aggregation." }, { "msg_contents": "Jeff Davis <jdavis@postgresql.org> writes:\n> Disk-based Hash Aggregation.\n\nI noticed that the regression tests seemed suddenly slower than they\nhave been. A bit of poking around reveals that this patch made\ngroupingsets.sql take approximately 8X longer than it used to,\nand more than twice as long as any other core regression test.\n\nThis is absolutely, positively, not acceptable for a test that gets\nrun hundreds of times a day by lots of people and buildfarm animals.\n\nIf there's no way to test the feature in some significantly-cheaper way,\nperhaps we should move this test out to a separate script that's not run\nby default.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 23 Mar 2020 02:05:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Disk-based Hash Aggregation." }, { "msg_contents": "On Mon, 2020-03-23 at 02:05 -0400, Tom Lane wrote:\n> If there's no way to test the feature in some significantly-cheaper\n> way,\n> perhaps we should move this test out to a separate script that's not\n> run\n> by default.\n\nI'll rework the tests.
I wanted to be a bit more aggressive about\ntesting right after the checkin to shake out any problems, but I don't\nthink that's necessary any more.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Mon, 23 Mar 2020 14:49:29 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Disk-based Hash Aggregation." }, { "msg_contents": "Jeff Davis <pgsql@j-davis.com> writes:\n> On Mon, 2020-03-23 at 02:05 -0400, Tom Lane wrote:\n>> If there's no way to test the feature in some significantly-cheaper\n>> way,\n>> perhaps we should move this test out to a separate script that's not\n>> run\n>> by default.\n\n> I'll rework the tests. I wanted to be a bit more aggressive about\n> testing right after the checkin to shake out any problems, but I don't\n> think that's necessary any more.\n\nFair enough.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 23 Mar 2020 17:58:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Disk-based Hash Aggregation." } ]
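The commit message at the top of this thread describes the spill strategy only in prose. As a rough illustration — a toy sketch in plain C under assumed simplifications (a fixed group cap instead of work_mem accounting, in-memory arrays instead of logical tapes, and a level-varied hash for repartitioning), not the actual nodeAgg.c code — the control flow can be sketched like this:

```c
#include <assert.h>
#include <stdlib.h>

#define MAX_GROUPS 4            /* toy stand-in for the work_mem limit */
#define NPARTS     2            /* spill partitions per pass */

typedef struct { int key; long count; } Group;

/*
 * Aggregate counts per key. Once MAX_GROUPS groups exist in memory,
 * enter "spill mode": existing groups still advance, but keys that
 * would create a new group are routed to a partition (by a hash that
 * varies with the recursion level, so batches keep splitting).
 * Finalized groups are appended to out[]; returns the number emitted.
 */
static int hashagg(const int *keys, int n, int level, Group *out, int nout)
{
    Group mem[MAX_GROUPS];
    int   ngroups = 0, emitted = 0;
    int  *spill[NPARTS];
    int   nspill[NPARTS] = {0};

    for (int p = 0; p < NPARTS; p++)
        spill[p] = malloc((size_t) n * sizeof(int));

    for (int i = 0; i < n; i++)
    {
        int g;

        for (g = 0; g < ngroups; g++)
            if (mem[g].key == keys[i])
                break;
        if (g < ngroups)
            mem[g].count++;                         /* advance existing group */
        else if (ngroups < MAX_GROUPS)
            mem[ngroups++] = (Group) { keys[i], 1 };    /* new group fits */
        else                                        /* spill mode */
        {
            int p = (int) (((unsigned) keys[i] >> level) % NPARTS);

            spill[p][nspill[p]++] = keys[i];
        }
    }

    for (int g = 0; g < ngroups && emitted < nout; g++)
        out[emitted++] = mem[g];                    /* finalize and emit */

    for (int p = 0; p < NPARTS; p++)                /* process spilled batches */
    {
        if (nspill[p] > 0)
            emitted += hashagg(spill[p], nspill[p], level + 1,
                               out + emitted, nout - emitted);
        free(spill[p]);
    }
    return emitted;
}
```

Feeding it more distinct keys than MAX_GROUPS exercises the behavior the commit describes: existing groups keep advancing during spill mode, while tuples for new groups are deferred to partitioned batches and processed recursively.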
[ { "msg_contents": "During tests, we catched an assertion failure in _bt_killitems() for \nposting tuple in unique index:\n\n/* kitem must have matching offnum when heap TIDs match */\nAssert(kitem->indexOffset == offnum);\n\nhttps://github.com/postgres/postgres/blob/master/src/backend/access/nbtree/nbtutils.c#L1809\n\nI struggle to understand the meaning of this assertion.\nDon't we allow the chance that posting tuple moved right on the page as \nthe comment says?\n\n  * We match items by heap TID before assuming they are the right ones to\n  * delete.  We cope with cases where items have moved right due to \ninsertions.\n\nIt seems that this is exactly the case for this failure.\nWe expected to find tuple at offset 121, but instead it is at offset \n125.  (see dump details below).\n\nUnfortunately I cannot attach test and core dump, since they rely on the \nenterprise multimaster extension code.\nHere are some details from the core dump, that I find essential:\n\nStack is\n_bt_killitems\n_bt_release_current_position\n_bt_release_scan_state\nbtrescan\nindex_rescan\nRelationFindReplTupleByIndex\n\n(gdb) p offnum\n$3 = 125\n(gdb) p *item\n$4 = {ip_blkid = {bi_hi = 0, bi_lo = 2}, ip_posid = 200}\n(gdb) p *kitem\n$5 = {heapTid = {ip_blkid = {bi_hi = 0, bi_lo = 2}, ip_posid = 200}, \nindexOffset = 121, tupleOffset = 32639}\n\n\nUnless I miss something, this assertion must be removed.\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n\n", "msg_date": "Thu, 19 Mar 2020 19:34:17 +0300", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": true, "msg_subject": "nbtree: assertion failure in _bt_killitems() for posting tuple" }, { "msg_contents": "On Thu, Mar 19, 2020 at 9:34 AM Anastasia Lubennikova\n<a.lubennikova@postgrespro.ru> wrote:\n> Unfortunately I cannot attach test and core dump, since they rely on the\n> enterprise multimaster extension code.\n> Here are some 
details from the core dump, that I find essential:\n>\n> Stack is\n> _bt_killitems\n> _bt_release_current_position\n> _bt_release_scan_state\n> btrescan\n> index_rescan\n> RelationFindReplTupleByIndex\n>\n> (gdb) p offnum\n> $3 = 125\n> (gdb) p *item\n> $4 = {ip_blkid = {bi_hi = 0, bi_lo = 2}, ip_posid = 200}\n> (gdb) p *kitem\n> $5 = {heapTid = {ip_blkid = {bi_hi = 0, bi_lo = 2}, ip_posid = 200},\n> indexOffset = 121, tupleOffset = 32639}\n>\n>\n> Unless I miss something, this assertion must be removed.\n\nIs this index an unlogged index, under the hood?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 19 Mar 2020 17:00:19 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: nbtree: assertion failure in _bt_killitems() for posting tuple" }, { "msg_contents": "On Thu, Mar 19, 2020 at 9:34 AM Anastasia Lubennikova\n<a.lubennikova@postgrespro.ru> wrote:\n> During tests, we catched an assertion failure in _bt_killitems() for\n> posting tuple in unique index:\n>\n> /* kitem must have matching offnum when heap TIDs match */\n> Assert(kitem->indexOffset == offnum);\n>\n> https://github.com/postgres/postgres/blob/master/src/backend/access/nbtree/nbtutils.c#L1809\n>\n> I struggle to understand the meaning of this assertion.\n> Don't we allow the chance that posting tuple moved right on the page as\n> the comment says?\n\nI think you're right. However, it still seems like we should check\nthat \"kitem->indexOffset\" is consistent among all of the BTScanPosItem\nentries that we have for each TID that we believe to be from the same\nposting list tuple.\n\n(Thinks some more...)\n\nEven if the offnum changes when the buffer lock is released, due to\nsomebody inserting on to the same page, I guess that we still expect\nto observe all of the heap TIDs together in the posting list. Though\nmaybe not. 
Maybe it's possible for a deduplication pass to occur when\nthe buffer lock is dropped, in which case we should arguably behave in\nthe same way when we see the same heap TIDs (i.e. delete the entire\nposting list without regard for whether or not the TIDs happened to\nappear in a posting list initially). I'm not sure, though.\n\nIt will make no difference most of the time, since the\nkill_prior_tuple stuff is generally not applied when the page is\nchanged at all -- the LSN is checked by the logic added by commit\n2ed5b87f. That's why I asked about unlogged indexes (we don't do the\nLSN thing there). But I still think that we need to take a firm\nposition on it.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 19 Mar 2020 17:34:11 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: nbtree: assertion failure in _bt_killitems() for posting tuple" }, { "msg_contents": "On 20.03.2020 03:34, Peter Geoghegan wrote:\n> On Thu, Mar 19, 2020 at 9:34 AM Anastasia Lubennikova\n> <a.lubennikova@postgrespro.ru> wrote:\n>> During tests, we catched an assertion failure in _bt_killitems() for\n>> posting tuple in unique index:\n>>\n>> /* kitem must have matching offnum when heap TIDs match */\n>> Assert(kitem->indexOffset == offnum);\n>>\n>> https://github.com/postgres/postgres/blob/master/src/backend/access/nbtree/nbtutils.c#L1809\n>>\n>> I struggle to understand the meaning of this assertion.\n>> Don't we allow the chance that posting tuple moved right on the page as\n>> the comment says?\n> I think you're right. 
However, it still seems like we should check\n> that \"kitem->indexOffset\" is consistent among all of the BTScanPosItem\n> entries that we have for each TID that we believe to be from the same\n> posting list tuple.\nWhat kind of consistency do you mean here?\nWe can probably change this assertion to\n     Assert(kitem->indexOffset <= offnum);\n\nAnything else?\n> (Thinks some more...)\n>\n> Even if the offnum changes when the buffer lock is released, due to\n> somebody inserting on to the same page, I guess that we still expect\n> to observe all of the heap TIDs together in the posting list. Though\n> maybe not. Maybe it's possible for a deduplication pass to occur when\n> the buffer lock is dropped, in which case we should arguably behave in\n> the same way when we see the same heap TIDs (i.e. delete the entire\n> posting list without regard for whether or not the TIDs happened to\n> appear in a posting list initially). I'm not sure, though.\n>\n> It will make no difference most of the time, since the\n> kill_prior_tuple stuff is generally not applied when the page is\n> changed at all -- the LSN is checked by the logic added by commit\n> 2ed5b87f. That's why I asked about unlogged indexes (we don't do the\n> LSN thing there). But I still think that we need to take a firm\n> position on it.\n>\nIt was a logged index. Though the failed test setup includes logical \nreplication. Does it handle LSNs differently?\nFinally, Alexander Lakhin managed to reproduce this on master. Test is \nattached as a patch.\n\nSpeaking of unlogged indexes. 
Now the situation, where items moved left \non the page is legal even if LSN haven't changed.\nAnyway, the cycle starts  from the offset that we saved in a first pass:\n\n       OffsetNumber offnum = kitem->indexOffset;\n       while (offnum <= maxoff)\n         ...\n\nIt still works correctly, but probably microvacuum becomes less \nefficient, if items were concurrently deduplicated.\nI wonder if this case worth optimizing?\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Tue, 24 Mar 2020 11:00:17 +0300", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: nbtree: assertion failure in _bt_killitems() for posting tuple" }, { "msg_contents": "On Tue, Mar 24, 2020 at 1:00 AM Anastasia Lubennikova\n<a.lubennikova@postgrespro.ru> wrote:\n> > I think you're right. However, it still seems like we should check\n> > that \"kitem->indexOffset\" is consistent among all of the BTScanPosItem\n> > entries that we have for each TID that we believe to be from the same\n> > posting list tuple.\n\nThe assertion failure happens in the logical replication worker\nbecause it uses a dirty snapshot, which cannot release the pin per\ncommit 2ed5b87f. This means that the leaf page can change between the\ntime that we observe an item is dead, and the time we reach\n_bt_killitems(), even though _bt_killitems() does get to kill items.\n\nI am thinking about pushing a fix along the lines of the attached\npatch. 
This preserves the assertion, while avoiding the check in cases\nwhere it doesn't apply, such as when a dirty snapshot is in use.\n\n-- \nPeter Geoghegan", "msg_date": "Sun, 5 Apr 2020 17:15:07 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: nbtree: assertion failure in _bt_killitems() for posting tuple" }, { "msg_contents": "On Sun, Apr 5, 2020 at 5:15 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I am thinking about pushing a fix along the lines of the attached\n> patch. This preserves the assertion, while avoiding the check in cases\n> where it doesn't apply, such as when a dirty snapshot is in use.\n\nPushed. Thanks.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 6 Apr 2020 14:47:38 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: nbtree: assertion failure in _bt_killitems() for posting tuple" } ]
[ { "msg_contents": "Hi,\n\nI was working on Asymmetric encryption in postgres using pgcrypto . I have\ngenerated the keys and stored in data folder and had inserted the data\nusing pgcrypto encrypt function .\n\nhere the problem comes, I was trying to decrypt the data but it was\nthrowing me the below error\n\nERROR: invalid byte sequence for encoding \"UTF8\": 0x95\n\nPlease find the below process which I followed\n\nGenerated the keys :\nCREATE EXTENSION pgcrypto;\n\n$ gpg --list-keys\n/home/ec2-user/.gnupg/pubring.gpg\n--------------------------------\n\npub 2048R/8GGGFF 2020-03-19\nuid [ultimate] postgres\nsub 2048R/GGGFF7 2020-03-19\n\ncreate table users(username varchar(100),id integer,ssn bytea);\n\npostgres=# INSERT INTO users\nVALUES('randomname',7,pgp_pub_encrypt('434-88-8880',dearmor(pg_read_file('keys/public.key'))));\n\nINSERT 0 1\n\npostgres=# SELECT\npgp_pub_decrypt(ssn,dearmor(pg_read_file('keys/private.key'))) AS mydata\nFROM users;\n\nERROR: invalid byte sequence for encoding \"UTF8\": 0x95\n\npostgres=# show client_encoding;\n client_encoding\n-----------------\n UTF8\n(1 row)\n\npostgres=# show server_encoding;\n server_encoding\n-----------------\n UTF8\n(1 row)\n\nCan anyone please help me on this , I am not sure why I was getting this\nerror.\n\nHi,I was working on Asymmetric encryption in postgres using pgcrypto . 
I have generated the keys and stored in data folder and had  inserted the data using pgcrypto encrypt function .here the problem comes, I was trying to decrypt the data but it was throwing me the below error ERROR:  invalid byte sequence for encoding \"UTF8\": 0x95Please find the below process which I followed Generated the keys :CREATE EXTENSION pgcrypto;$ gpg --list-keys/home/ec2-user/.gnupg/pubring.gpg--------------------------------pub   2048R/8GGGFF 2020-03-19uid       [ultimate] postgressub   2048R/GGGFF7 2020-03-19create table users(username varchar(100),id integer,ssn bytea);postgres=# INSERT INTO users VALUES('randomname',7,pgp_pub_encrypt('434-88-8880',dearmor(pg_read_file('keys/public.key'))));INSERT 0 1postgres=# SELECT pgp_pub_decrypt(ssn,dearmor(pg_read_file('keys/private.key'))) AS mydata FROM users;ERROR:  invalid byte sequence for encoding \"UTF8\": 0x95postgres=# show client_encoding; client_encoding ----------------- UTF8(1 row)postgres=# show server_encoding; server_encoding ----------------- UTF8(1 row)Can anyone please help me on this , I am not sure why I was getting this error.", "msg_date": "Thu, 19 Mar 2020 16:28:30 -0400", "msg_from": "Chaitanya bodlapati <chaitanya.bodlapati4330@gmail.com>", "msg_from_op": true, "msg_subject": "invalid byte sequence for encoding \"UTF8\": 0x95-while using PGP\n Encryption -PostgreSQL" }, { "msg_contents": "Hi,\n\nI was working on Asymmetric encryption in postgres using pgcrypto . 
I have\ngenerated the keys and stored in data folder and had inserted the data\nusing pgcrypto encrypt function .\n\nhere the problem comes, I was trying to decrypt the data but it was\nthrowing me the below error\n\nERROR: invalid byte sequence for encoding \"UTF8\": 0x95\n\nPlease find the below process which I followed\n\nGenerated the keys :\nCREATE EXTENSION pgcrypto;\n\n$ gpg --list-keys\n/home/ec2-user/.gnupg/pubring.gpg\n--------------------------------\n\npub 2048R/8GGGFF 2020-03-19\nuid [ultimate] postgres\nsub 2048R/GGGFF7 2020-03-19\n\ncreate table users(username varchar(100),id integer,ssn bytea);\n\npostgres=# INSERT INTO users\nVALUES('randomname',7,pgp_pub_encrypt('434-88-8880',dearmor(pg_read_file('keys/public.key'))));\n\nINSERT 0 1\n\npostgres=# SELECT\npgp_pub_decrypt(ssn,dearmor(pg_read_file('keys/private.key'))) AS mydata\nFROM users;\n\nERROR: invalid byte sequence for encoding \"UTF8\": 0x95\n\npostgres=# show client_encoding;\n client_encoding\n-----------------\n UTF8\n(1 row)\n\npostgres=# show server_encoding;\n server_encoding\n-----------------\n UTF8\n(1 row)\n\nCan anyone please help me on this , I am not sure why I was getting this\nerror.\n\nHi,I was working on Asymmetric encryption in postgres using pgcrypto . 
I have generated the keys and stored in data folder and had  inserted the data using pgcrypto encrypt function .here the problem comes, I was trying to decrypt the data but it was throwing me the below error ERROR:  invalid byte sequence for encoding \"UTF8\": 0x95Please find the below process which I followed Generated the keys :CREATE EXTENSION pgcrypto;$ gpg --list-keys/home/ec2-user/.gnupg/pubring.gpg--------------------------------pub   2048R/8GGGFF 2020-03-19uid       [ultimate] postgressub   2048R/GGGFF7 2020-03-19create table users(username varchar(100),id integer,ssn bytea);postgres=# INSERT INTO users VALUES('randomname',7,pgp_pub_encrypt('434-88-8880',dearmor(pg_read_file('keys/public.key'))));INSERT 0 1postgres=# SELECT pgp_pub_decrypt(ssn,dearmor(pg_read_file('keys/private.key'))) AS mydata FROM users;ERROR:  invalid byte sequence for encoding \"UTF8\": 0x95postgres=# show client_encoding; client_encoding ----------------- UTF8(1 row)postgres=# show server_encoding; server_encoding ----------------- UTF8(1 row)Can anyone please help me on this , I am not sure why I was getting this error.", "msg_date": "Thu, 19 Mar 2020 16:29:46 -0400", "msg_from": "Chaitanya bodlapati <chaitanya.bodlapati4330@gmail.com>", "msg_from_op": true, "msg_subject": "Fwd: invalid byte sequence for encoding \"UTF8\": 0x95-while using PGP\n Encryption -PostgreSQL" }, { "msg_contents": "Hi,\n\nI was working on Asymmetric encryption in postgres using pgcrypto . 
I have\ngenerated the keys and stored in data folder and had inserted the data\nusing pgcrypto encrypt function .\n\nhere the problem comes, I was trying to decrypt the data but it was\nthrowing me the below error\n\nERROR: invalid byte sequence for encoding \"UTF8\": 0x95\n\nPlease find the below process which I followed\n\nGenerated the keys :\nCREATE EXTENSION pgcrypto;\n\n$ gpg --list-keys\n/home/ec2-user/.gnupg/pubring.gpg\n--------------------------------\n\npub 2048R/8GGGFF 2020-03-19\nuid [ultimate] postgres\nsub 2048R/GGGFF7 2020-03-19\n\ncreate table users(username varchar(100),id integer,ssn bytea);\n\npostgres=# INSERT INTO users\nVALUES('randomname',7,pgp_pub_encrypt('434-88-8880',dearmor(pg_read_file('keys/public.key'))));\n\nINSERT 0 1\n\npostgres=# SELECT\npgp_pub_decrypt(ssn,dearmor(pg_read_file('keys/private.key'))) AS mydata\nFROM users;\n\nERROR: invalid byte sequence for encoding \"UTF8\": 0x95\n\npostgres=# show client_encoding;\n client_encoding\n-----------------\n UTF8\n(1 row)\n\npostgres=# show server_encoding;\n server_encoding\n-----------------\n UTF8\n(1 row)\n\nCan anyone please help me on this , I am not sure why I was getting this\nerror.\n\nHi,I was working on Asymmetric encryption in postgres using pgcrypto . 
I have generated the keys and stored in data folder and had  inserted the data using pgcrypto encrypt function .here the problem comes, I was trying to decrypt the data but it was throwing me the below error ERROR:  invalid byte sequence for encoding \"UTF8\": 0x95Please find the below process which I followed Generated the keys :CREATE EXTENSION pgcrypto;$ gpg --list-keys/home/ec2-user/.gnupg/pubring.gpg--------------------------------pub   2048R/8GGGFF 2020-03-19uid       [ultimate] postgressub   2048R/GGGFF7 2020-03-19create table users(username varchar(100),id integer,ssn bytea);postgres=# INSERT INTO users VALUES('randomname',7,pgp_pub_encrypt('434-88-8880',dearmor(pg_read_file('keys/public.key'))));INSERT 0 1postgres=# SELECT pgp_pub_decrypt(ssn,dearmor(pg_read_file('keys/private.key'))) AS mydata FROM users;ERROR:  invalid byte sequence for encoding \"UTF8\": 0x95postgres=# show client_encoding; client_encoding ----------------- UTF8(1 row)postgres=# show server_encoding; server_encoding ----------------- UTF8(1 row)Can anyone please help me on this , I am not sure why I was getting this error.", "msg_date": "Thu, 19 Mar 2020 18:11:55 -0400", "msg_from": "Chaitanya bodlapati <chaitanya.bodlapati4330@gmail.com>", "msg_from_op": true, "msg_subject": "Fwd: invalid byte sequence for encoding \"UTF8\": 0x95-while using PGP\n Encryption -PostgreSQL" } ]
[ { "msg_contents": "Hi,\n\nI was looking at [1], wanting to suggest a query to monitor what\nautovacuum is mostly waiting on. Partially to figure out whether it's\nmostly autovacuum cost limiting.\n\nBut uh, unfortunately the vacuum delay code just sleeps without setting\na wait event:\n\nvoid\nvacuum_delay_point(void)\n{\n...\n\t/* Nap if appropriate */\n\tif (msec > 0)\n\t{\n\t\tif (msec > VacuumCostDelay * 4)\n\t\t\tmsec = VacuumCostDelay * 4;\n\n\t\tpg_usleep((long) (msec * 1000));\n\n\nSeems like it should instead use a new wait event in the PG_WAIT_TIMEOUT\nclass?\n\nGiven how frequently we run into trouble with [auto]vacuum throttling\nbeing a problem, and there not being any way to monitor that currently,\nthat seems like it'd be a significant improvement, given the effort?\n\n\nIt'd probably also be helpful to report the total time [auto]vacuum\nspent being delayed for vacuum verbose/autovacuum logging, but imo\nthat'd be a parallel feature to a wait event, not a replacement.\n\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/CAE39h22zPLrkH17GrkDgAYL3kbjvySYD1io%2BrtnAUFnaJJVS4g%40mail.gmail.com\n\n\n", "msg_date": "Thu, 19 Mar 2020 15:44:49 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Why does [auto-]vacuum delay not report a wait event?" }, { "msg_contents": "On Fri, Mar 20, 2020 at 4:15 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> I was looking at [1], wanting to suggest a query to monitor what\n> autovacuum is mostly waiting on. 
Partially to figure out whether it's\n> mostly autovacuum cost limiting.\n>\n> But uh, unfortunately the vacuum delay code just sleeps without setting\n> a wait event:\n>\n> void\n> vacuum_delay_point(void)\n> {\n> ...\n> /* Nap if appropriate */\n> if (msec > 0)\n> {\n> if (msec > VacuumCostDelay * 4)\n> msec = VacuumCostDelay * 4;\n>\n> pg_usleep((long) (msec * 1000));\n>\n>\n> Seems like it should instead use a new wait event in the PG_WAIT_TIMEOUT\n> class?\n>\n\n+1. I think it will be quite helpful.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 20 Mar 2020 17:01:57 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Why does [auto-]vacuum delay not report a wait event?" }, { "msg_contents": "On Fri, Mar 20, 2020 at 12:32 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Mar 20, 2020 at 4:15 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > I was looking at [1], wanting to suggest a query to monitor what\n> > autovacuum is mostly waiting on. Partially to figure out whether it's\n> > mostly autovacuum cost limiting.\n> >\n> > But uh, unfortunately the vacuum delay code just sleeps without setting\n> > a wait event:\n> >\n> > void\n> > vacuum_delay_point(void)\n> > {\n> > ...\n> > /* Nap if appropriate */\n> > if (msec > 0)\n> > {\n> > if (msec > VacuumCostDelay * 4)\n> > msec = VacuumCostDelay * 4;\n> >\n> > pg_usleep((long) (msec * 1000));\n> >\n> >\n> > Seems like it should instead use a new wait event in the PG_WAIT_TIMEOUT\n> > class?\n> >\n>\n> +1. I think it will be quite helpful.\n\nDefinite +1. 
There should be a wait event, and identifying this\nparticular case is certainly interesting enough that it should have\nit's own.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Fri, 20 Mar 2020 12:54:31 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Why does [auto-]vacuum delay not report a wait event?" }, { "msg_contents": "On Thu, Mar 19, 2020 at 03:44:49PM -0700, Andres Freund wrote:\n> But uh, unfortunately the vacuum delay code just sleeps without setting\n> a wait event:\n...\n> Seems like it should instead use a new wait event in the PG_WAIT_TIMEOUT\n> class?\n> \n> Given how frequently we run into trouble with [auto]vacuum throttling\n> being a problem, and there not being any way to monitor that currently,\n> that seems like it'd be a significant improvement, given the effort?\n\nI see that pg_sleep sets WAIT_EVENT_PG_SLEEP, but pg_usleep doesn't, I guess\nbecause the overhead is significant for a small number of usecs, and because it\ndoesn't work for pg_usleep to set a generic event if callers want to be able to\nset a more specific wait event.\n\nAlso, I noticed that SLEEP_ON_ASSERT comment (31338352b) wants to use pg_usleep\n\"which seems too short.\". 
Surely it should use pg_sleep, added at 782eefc58 a\nfew years later ?\n\nAlso, there was a suggestion recently that this should have a separate\nvacuum_progress phase:\n|vacuumlazy.c:#define VACUUM_TRUNCATE_LOCK_WAIT_INTERVAL\t50 /* ms */\n|vacuumlazy.c:pg_usleep(VACUUM_TRUNCATE_LOCK_WAIT_INTERVAL * 1000L);\n\nI was planning to look at that eventually ; should it have a wait event instead\nor in addition ?\n\n> It'd probably also be helpful to report the total time [auto]vacuum\n> spent being delayed for vacuum verbose/autovacuum logging, but imo\n> that'd be a parallel feature to a wait event, not a replacement.\n\nThis requires wider changes than I anticipated.\n\n2020-03-20 22:35:51.308 CDT [16534] LOG: automatic aggressive vacuum of table \"template1.pg_catalog.pg_class\": index scans: 1\n pages: 0 removed, 11 remain, 0 skipped due to pins, 0 skipped frozen\n tuples: 6 removed, 405 remain, 0 are dead but not yet removable, oldest xmin: 1574\n buffer usage: 76 hits, 7 misses, 8 dirtied\n avg read rate: 16.378 MB/s, avg write rate: 18.718 MB/s\n Cost-based delay: 2 msec\n system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s\n\nVACUUM VERBOSE wouldn't normally be run with cost_delay > 0, so that field will\ntypically be zero, so I made it conditional, which is supposedly why it's\nwritten like that, even though that's not otherwise being used since 17eaae98.\n\nAdded at https://commitfest.postgresql.org/28/2515/\n\n-- \nJustin", "msg_date": "Fri, 20 Mar 2020 23:07:51 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Why does [auto-]vacuum delay not report a wait event?" 
}, { "msg_contents": "On Sat, 21 Mar 2020 at 09:38, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Thu, Mar 19, 2020 at 03:44:49PM -0700, Andres Freund wrote:\n> > But uh, unfortunately the vacuum delay code just sleeps without setting\n> > a wait event:\n> ...\n> > Seems like it should instead use a new wait event in the PG_WAIT_TIMEOUT\n> > class?\n> >\n> > Given how frequently we run into trouble with [auto]vacuum throttling\n> > being a problem, and there not being any way to monitor that currently,\n> > that seems like it'd be a significant improvement, given the effort?\n>\n> I see that pg_sleep sets WAIT_EVENT_PG_SLEEP, but pg_usleep doesn't, I\nguess\n> because the overhead is significant for a small number of usecs, and\nbecause it\n> doesn't work for pg_usleep to set a generic event if callers want to be\nable to\n> set a more specific wait event.\n>\n> Also, I noticed that SLEEP_ON_ASSERT comment (31338352b) wants to use\npg_usleep\n> \"which seems too short.\". Surely it should use pg_sleep, added at\n782eefc58 a\n> few years later ?\n>\n> Also, there was a suggestion recently that this should have a separate\n> vacuum_progress phase:\n> |vacuumlazy.c:#define VACUUM_TRUNCATE_LOCK_WAIT_INTERVAL 50 /* ms\n*/\n> |vacuumlazy.c:pg_usleep(VACUUM_TRUNCATE_LOCK_WAIT_INTERVAL * 1000L);\n>\n> I was planning to look at that eventually ; should it have a wait event\ninstead\n> or in addition ?\n>\n> > It'd probably also be helpful to report the total time [auto]vacuum\n> > spent being delayed for vacuum verbose/autovacuum logging, but imo\n> > that'd be a parallel feature to a wait event, not a replacement.\n>\n> This requires wider changes than I anticipated.\n>\n> 2020-03-20 22:35:51.308 CDT [16534] LOG: automatic aggressive vacuum of\ntable \"template1.pg_catalog.pg_class\": index scans: 1\n> pages: 0 removed, 11 remain, 0 skipped due to pins, 0 skipped\nfrozen\n> tuples: 6 removed, 405 remain, 0 are dead but not yet removable,\noldest xmin: 1574\n> buffer 
usage: 76 hits, 7 misses, 8 dirtied\n> avg read rate: 16.378 MB/s, avg write rate: 18.718 MB/s\n> Cost-based delay: 2 msec\n> system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s\n>\n> VACUUM VERBOSE wouldn't normally be run with cost_delay > 0, so that\nfield will\n> typically be zero, so I made it conditional, which is supposedly why it's\n> written like that, even though that's not otherwise being used since\n17eaae98.\n>\n> Added at https://commitfest.postgresql.org/28/2515/\n>\n> --\n> Justin\n\nThanks Justin for quick patch.\n\nI haven't reviewed your full patch but I can see that \"make installcheck\"\nis failing with segment fault.\n\n*Stack trace;*\nCore was generated by `postgres: autovacuum worker regression\n '.\nProgram terminated with signal SIGSEGV, Segmentation fault.\n#0 0x0000560080cc827c in ginInsertCleanup (ginstate=0x7ffe0b648980,\nfull_clean=false, fill_fsm=true, forceCleanup=true, stats=0x0) at\nginfast.c:895\n895 stats->delay_msec += vacuum_delay_point();\n(gdb) bt\n#0 0x0000560080cc827c in ginInsertCleanup (ginstate=0x7ffe0b648980,\nfull_clean=false, fill_fsm=true, forceCleanup=true, stats=0x0) at\nginfast.c:895\n#1 0x0000560080cdd0c3 in ginvacuumcleanup (info=0x7ffe0b64b0c0, stats=0x0)\nat ginvacuum.c:706\n#2 0x0000560080d791d4 in index_vacuum_cleanup (info=0x7ffe0b64b0c0,\nstats=0x0) at indexam.c:711\n#3 0x0000560080fa790e in do_analyze_rel (onerel=0x56008259e6e0,\nparams=0x560082206de4, va_cols=0x0, acquirefunc=0x560080fa8a75\n<acquire_sample_rows>, relpages=25, inh=false,\n in_outer_xact=false, elevel=13) at analyze.c:683\n#4 0x0000560080fa5f3e in analyze_rel (relid=37789,\nrelation=0x5600822ba1a0, params=0x560082206de4, va_cols=0x0,\nin_outer_xact=false, bstrategy=0x5600822064e0) at analyze.c:263\n#5 0x00005600810d9eb7 in vacuum (relations=0x56008227e5b8,\nparams=0x560082206de4, bstrategy=0x5600822064e0, isTopLevel=true) at\nvacuum.c:468\n#6 0x0000560081357608 in autovacuum_do_vac_analyze 
(tab=0x560082206de0,\nbstrategy=0x5600822064e0) at autovacuum.c:3115\n#7 0x00005600813557dd in do_autovacuum () at autovacuum.c:2466\n#8 0x000056008135373d in AutoVacWorkerMain (argc=0, argv=0x0) at\nautovacuum.c:1693\n#9 0x0000560081352f75 in StartAutoVacWorker () at autovacuum.c:1487\n#10 0x000056008137ed5f in StartAutovacuumWorker () at postmaster.c:5580\n#11 0x000056008137e199 in sigusr1_handler (postgres_signal_arg=10) at\npostmaster.c:5297\n#12 <signal handler called>\n#13 0x00007f18b778bff7 in __GI___select (nfds=9, readfds=0x7ffe0b64c050,\nwritefds=0x0, exceptfds=0x0, timeout=0x7ffe0b64bfc0) at\n../sysdeps/unix/sysv/linux/select.c:41\n#14 0x000056008137499a in ServerLoop () at postmaster.c:1691\n#15 0x0000560081373e63 in PostmasterMain (argc=3, argv=0x560082189020) at\npostmaster.c:1400\n#16 0x00005600811d37ea in main (argc=3, argv=0x560082189020) at main.c:210\n\nHere, stats is null so it is crashing.\n\n-- \nThanks and Regards\nMahendra Singh Thalor\nEnterpriseDB: http://www.enterprisedb.com\n\n", "msg_date": "Sun, 22 Mar 2020 05:24:29 +0530", 
"msg_from": "Mahendra Singh Thalor <mahi6run@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Why does [auto-]vacuum delay not report a wait event?" }, { "msg_contents": "Hi,\n\n> On Thu, Mar 19, 2020 at 03:44:49PM -0700, Andres Freund wrote:\n> > But uh, unfortunately the vacuum delay code just sleeps without setting\n> > a wait event:\n> ...\n> > Seems like it should instead use a new wait event in the PG_WAIT_TIMEOUT\n> > class?\n> > \n> > Given how frequently we run into trouble with [auto]vacuum throttling\n> > being a problem, and there not being any way to monitor that currently,\n> > that seems like it'd be a significant improvement, given the effort?\n> \n> I see that pg_sleep sets WAIT_EVENT_PG_SLEEP, but pg_usleep doesn't, I guess\n> because the overhead is significant for a small number of usecs, and because it\n> doesn't work for pg_usleep to set a generic event if callers want to be able to\n> set a more specific wait event.\n\nI don't think the overhead is a meaningful issue - compared to yielding\nto the kernel / context switching, setting the wait event isn't a\nsignificant cost.\n\nI think the issue is more the second part - it's used as part of other\nthings using their own wait events potentially.\n\nI think we should just rip out pg_usleep and replace it with latch\nwaits. While the case at hand is user configurable (but the max wait\ntime is 100ms, so it's not too bad), it's generally not great to sleep\nwithout ensuring that interrupts are handled. Nor is it great that we\ndon't sleep again if the sleep is interrupted. There may be a case or\ntwo where we don't want to layer on top of latches (perhaps the spinlock\ndelay loop?), but pretty much everywhere else a different routine would\nmake more sense.\n\n\n> Also, I noticed that SLEEP_ON_ASSERT comment (31338352b) wants to use pg_usleep\n> \"which seems too short.\". 
Surely it should use pg_sleep, added at 782eefc58 a\n> few years later ?\n\nI don't see problem with using sleep here?\n\n\n> Also, there was a suggestion recently that this should have a separate\n> vacuum_progress phase:\n> |vacuumlazy.c:#define VACUUM_TRUNCATE_LOCK_WAIT_INTERVAL\t50 /* ms */\n> |vacuumlazy.c:pg_usleep(VACUUM_TRUNCATE_LOCK_WAIT_INTERVAL * 1000L);\n> \n> I was planning to look at that eventually ; should it have a wait event instead\n> or in addition ?\n\nA separate phase? How would that look like? We don't want to replace the\nknowledge that currently e.g. the heap scan is in progress?\n\n\n\n> > It'd probably also be helpful to report the total time [auto]vacuum\n> > spent being delayed for vacuum verbose/autovacuum logging, but imo\n> > that'd be a parallel feature to a wait event, not a replacement.\n> \n> This requires wider changes than I anticipated.\n> \n> 2020-03-20 22:35:51.308 CDT [16534] LOG: automatic aggressive vacuum of table \"template1.pg_catalog.pg_class\": index scans: 1\n> pages: 0 removed, 11 remain, 0 skipped due to pins, 0 skipped frozen\n> tuples: 6 removed, 405 remain, 0 are dead but not yet removable, oldest xmin: 1574\n> buffer usage: 76 hits, 7 misses, 8 dirtied\n> avg read rate: 16.378 MB/s, avg write rate: 18.718 MB/s\n> Cost-based delay: 2 msec\n> system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s\n> \n> VACUUM VERBOSE wouldn't normally be run with cost_delay > 0, so that field will\n> typically be zero, so I made it conditional\n\nI personally dislike conditional output like that, because it makes\nparsing the output harder.\n\n\n> , which is supposedly why it's written like that, even though that's\n> not otherwise being used since 17eaae98.\n\nWell, it's also just hard to otherwise manage this long translatable\nstrings. 
And we're essentially still conditional, due to the 'msgfmt'\nbranches - if the whole output were output in a single appendStringInfo\ncall, we'd have to duplicate all the following format strings too.\n\nPersonally I really wish we'd just merge the vacuum verbose and the\nautovacuum logging code, even if it causes some temporary pain.\n\n\nOn 2020-03-20 23:07:51 -0500, Justin Pryzby wrote:\n> From 68c5ad8c7a9feb0c68afad310e3f52c21c3cdbaf Mon Sep 17 00:00:00 2001\n> From: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Fri, 20 Mar 2020 20:47:30 -0500\n> Subject: [PATCH v1 1/2] Report wait event for cost-based vacuum delay\n> \n> ---\n> doc/src/sgml/monitoring.sgml | 2 ++\n> src/backend/commands/vacuum.c | 2 ++\n> src/backend/postmaster/pgstat.c | 3 +++\n> src/include/pgstat.h | 3 ++-\n> 4 files changed, 9 insertions(+), 1 deletion(-)\n> \n> diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml\n> index 5bffdcce10..46c99a55b7 100644\n> --- a/doc/src/sgml/monitoring.sgml\n> +++ b/doc/src/sgml/monitoring.sgml\n> @@ -1507,6 +1507,8 @@ postgres 27093 0.0 0.0 30096 2752 ? 
Ss 11:34 0:00 postgres: ser\n> (<filename>pg_wal</filename>, archive or stream) before trying\n> again to retrieve WAL data, at recovery.\n> </entry>\n> + <entry><literal>VacuumDelay</literal></entry>\n> + <entry>Waiting in a cost-based vacuum delay point.</entry>\n> </row>\n> <row>\n> <entry morerows=\"68\"><literal>IO</literal></entry>\n> diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c\n> index d625d17bf4..59731d687f 100644\n> --- a/src/backend/commands/vacuum.c\n> +++ b/src/backend/commands/vacuum.c\n> @@ -2019,7 +2019,9 @@ vacuum_delay_point(void)\n> \t\tif (msec > VacuumCostDelay * 4)\n> \t\t\tmsec = VacuumCostDelay * 4;\n> \n> +\t\tpgstat_report_wait_start(WAIT_EVENT_VACUUM_DELAY);\n> \t\tpg_usleep((long) (msec * 1000));\n> +\t\tpgstat_report_wait_end();\n> \n> \t\tVacuumCostBalance = 0;\n> \n> diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c\n> index d29c211a76..742ec07b59 100644\n> --- a/src/backend/postmaster/pgstat.c\n> +++ b/src/backend/postmaster/pgstat.c\n> @@ -3824,6 +3824,9 @@ pgstat_get_wait_timeout(WaitEventTimeout w)\n> \t\tcase WAIT_EVENT_RECOVERY_RETRIEVE_RETRY_INTERVAL:\n> \t\t\tevent_name = \"RecoveryRetrieveRetryInterval\";\n> \t\t\tbreak;\n> +\t\tcase WAIT_EVENT_VACUUM_DELAY:\n> +\t\t\tevent_name = \"VacuumDelay\";\n> +\t\t\tbreak;\n> \t\t\t/* no default case, so that compiler will warn */\n> \t}\n> \n> diff --git a/src/include/pgstat.h b/src/include/pgstat.h\n> index 851d0a7246..4db40e23cc 100644\n> --- a/src/include/pgstat.h\n> +++ b/src/include/pgstat.h\n> @@ -848,7 +848,8 @@ typedef enum\n> \tWAIT_EVENT_BASE_BACKUP_THROTTLE = PG_WAIT_TIMEOUT,\n> \tWAIT_EVENT_PG_SLEEP,\n> \tWAIT_EVENT_RECOVERY_APPLY_DELAY,\n> -\tWAIT_EVENT_RECOVERY_RETRIEVE_RETRY_INTERVAL\n> +\tWAIT_EVENT_RECOVERY_RETRIEVE_RETRY_INTERVAL,\n> +\tWAIT_EVENT_VACUUM_DELAY,\n> } WaitEventTimeout;\n\nLooks good to me - unless somebody protests I'm going to apply this\nshortly.\n\n\n> From 
8153d909cabb94474890f0b55c7733f33923e3c5 Mon Sep 17 00:00:00 2001\n> From: Justin Pryzby <pryzbyj@telsasoft.com>\n> Date: Fri, 20 Mar 2020 22:08:09 -0500\n> Subject: [PATCH v1 2/2] vacuum to report time spent in cost-based delay\n> \n> ---\n> src/backend/access/gin/ginfast.c | 6 +++---\n> src/backend/access/gin/ginvacuum.c | 6 +++---\n> src/backend/access/gist/gistvacuum.c | 2 +-\n> src/backend/access/hash/hash.c | 2 +-\n> src/backend/access/heap/vacuumlazy.c | 17 +++++++++++------\n> src/backend/access/nbtree/nbtree.c | 2 +-\n> src/backend/access/spgist/spgvacuum.c | 4 ++--\n> src/backend/commands/vacuum.c | 8 ++++++--\n> src/include/access/genam.h | 1 +\n> src/include/commands/vacuum.h | 2 +-\n> 10 files changed, 30 insertions(+), 20 deletions(-)\n> \n> diff --git a/src/backend/access/gin/ginfast.c b/src/backend/access/gin/ginfast.c\n> index 11d7ec067a..c99dc4a8be 100644\n> --- a/src/backend/access/gin/ginfast.c\n> +++ b/src/backend/access/gin/ginfast.c\n> @@ -892,7 +892,7 @@ ginInsertCleanup(GinState *ginstate, bool full_clean,\n> \t\t */\n> \t\tprocessPendingPage(&accum, &datums, page, FirstOffsetNumber);\n> \n> -\t\tvacuum_delay_point();\n> +\t\tstats->delay_msec += vacuum_delay_point();\n> \n> \t\t/*\n> \t\t * Is it time to flush memory to disk?\tFlush if we are at the end of\n> @@ -929,7 +929,7 @@ ginInsertCleanup(GinState *ginstate, bool full_clean,\n> \t\t\t{\n> \t\t\t\tginEntryInsert(ginstate, attnum, key, category,\n> \t\t\t\t\t\t\t list, nlist, NULL);\n> -\t\t\t\tvacuum_delay_point();\n> +\t\t\t\tstats->delay_msec += vacuum_delay_point();\n> \t\t\t}\n> \n> \t\t\t/*\n> @@ -1002,7 +1002,7 @@ ginInsertCleanup(GinState *ginstate, bool full_clean,\n> \t\t/*\n> \t\t * Read next page in pending list\n> \t\t */\n> -\t\tvacuum_delay_point();\n> +\t\tstats->delay_msec += vacuum_delay_point();\n> \t\tbuffer = ReadBuffer(index, blkno);\n> \t\tLockBuffer(buffer, GIN_SHARE);\n> \t\tpage = BufferGetPage(buffer);\n\nOn a green field I'd really like to pass a 'vacuum 
state' struct to\nvacuum_delay_point(). But that likely would be too invasive to add,\nbecause it seems like it'd have to be wired to a number of\nfunctions that can be used by extensions etc (like the bulk delete\ncallbacks). Then we'd just have vacuum_delay_point() internally sum up\nthe waiting time.\n\nGiven the current style of vacuum_delay_point() calculating everything\nvia globals (which I hate), it might be less painful to do this by\nadding another global to track the sleeps via a global alongside\nVacuumCostBalance?\n\n\n> diff --git a/src/backend/access/hash/hash.c b/src/backend/access/hash/hash.c\n> index 4871b7ff4d..86a9c7fdaa 100644\n> --- a/src/backend/access/hash/hash.c\n> +++ b/src/backend/access/hash/hash.c\n> @@ -709,7 +709,7 @@ hashbucketcleanup(Relation rel, Bucket cur_bucket, Buffer bucket_buf,\n> \t\tbool\t\tretain_pin = false;\n> \t\tbool\t\tclear_dead_marking = false;\n> \n> -\t\tvacuum_delay_point();\n> +\t\t// XXX stats->delay_msec += vacuum_delay_point();\n> \n> \t\tpage = BufferGetPage(buf);\n> \t\topaque = (HashPageOpaque) PageGetSpecialPointer(page);\n\nI assume this is because there's no stats object reachable here? Should\nstill continue to call vacuum_delay_point ;)\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 21 Mar 2020 17:24:57 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Why does [auto-]vacuum delay not report a wait event?" 
}, { "msg_contents": "On Sat, Mar 21, 2020 at 5:25 PM Andres Freund <andres@anarazel.de> wrote:\n> > diff --git a/src/backend/access/gin/ginfast.c b/src/backend/access/gin/ginfast.c\n> > index 11d7ec067a..c99dc4a8be 100644\n> > --- a/src/backend/access/gin/ginfast.c\n> > +++ b/src/backend/access/gin/ginfast.c\n> > @@ -892,7 +892,7 @@ ginInsertCleanup(GinState *ginstate, bool full_clean,\n> > */\n> > processPendingPage(&accum, &datums, page, FirstOffsetNumber);\n> >\n> > - vacuum_delay_point();\n> > + stats->delay_msec += vacuum_delay_point();\n> >\n> > /*\n> > * Is it time to flush memory to disk? Flush if we are at the end of\n> > @@ -929,7 +929,7 @@ ginInsertCleanup(GinState *ginstate, bool full_clean,\n> > {\n> > ginEntryInsert(ginstate, attnum, key, category,\n> > list, nlist, NULL);\n> > - vacuum_delay_point();\n> > + stats->delay_msec += vacuum_delay_point();\n> > }\n> >\n> > /*\n> > @@ -1002,7 +1002,7 @@ ginInsertCleanup(GinState *ginstate, bool full_clean,\n> > /*\n> > * Read next page in pending list\n> > */\n> > - vacuum_delay_point();\n> > + stats->delay_msec += vacuum_delay_point();\n> > buffer = ReadBuffer(index, blkno);\n> > LockBuffer(buffer, GIN_SHARE);\n> > page = BufferGetPage(buffer);\n>\n> On a green field I'd really like to pass a 'vacuum state' struct to\n> vacuum_delay_point().\n\nIn a green field situation, there'd be no ginInsertCleanup() at all.\nIt is a Lovecraftian horror show. The entire thing should be scrapped\nnow, in fact.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Sat, 21 Mar 2020 17:51:19 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Why does [auto-]vacuum delay not report a wait event?" 
}, { "msg_contents": "Hi,\n\nOn March 21, 2020 5:51:19 PM PDT, Peter Geoghegan <pg@bowt.ie> wrote:\n>On Sat, Mar 21, 2020 at 5:25 PM Andres Freund <andres@anarazel.de>\n>wrote:\n>> > diff --git a/src/backend/access/gin/ginfast.c\n>b/src/backend/access/gin/ginfast.c\n>> > index 11d7ec067a..c99dc4a8be 100644\n>> > --- a/src/backend/access/gin/ginfast.c\n>> > +++ b/src/backend/access/gin/ginfast.c\n>> > @@ -892,7 +892,7 @@ ginInsertCleanup(GinState *ginstate, bool\n>full_clean,\n>> > */\n>> > processPendingPage(&accum, &datums, page,\n>FirstOffsetNumber);\n>> >\n>> > - vacuum_delay_point();\n>> > + stats->delay_msec += vacuum_delay_point();\n>> >\n>> > /*\n>> > * Is it time to flush memory to disk? Flush if we\n>are at the end of\n>> > @@ -929,7 +929,7 @@ ginInsertCleanup(GinState *ginstate, bool\n>full_clean,\n>> > {\n>> > ginEntryInsert(ginstate, attnum, key,\n>category,\n>> > list,\n>nlist, NULL);\n>> > - vacuum_delay_point();\n>> > + stats->delay_msec +=\n>vacuum_delay_point();\n>> > }\n>> >\n>> > /*\n>> > @@ -1002,7 +1002,7 @@ ginInsertCleanup(GinState *ginstate, bool\n>full_clean,\n>> > /*\n>> > * Read next page in pending list\n>> > */\n>> > - vacuum_delay_point();\n>> > + stats->delay_msec += vacuum_delay_point();\n>> > buffer = ReadBuffer(index, blkno);\n>> > LockBuffer(buffer, GIN_SHARE);\n>> > page = BufferGetPage(buffer);\n>>\n>> On a green field I'd really like to pass a 'vacuum state' struct to\n>> vacuum_delay_point().\n>\n>In a green field situation, there'd be no ginInsertCleanup() at all.\n>It is a Lovecraftian horror show. The entire thing should be scrapped\n>now, in fact.\n\nMy comment is entirely unrelated to GIN, but about the way the delay infrastructure manages state (in global vars).\n\nAndres\n-- \nSent from my Android device with K-9 Mail. 
Please excuse my brevity.\n\n\n", "msg_date": "Sat, 21 Mar 2020 17:53:45 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Why does [auto-]vacuum delay not report a wait event?" }, { "msg_contents": "On Sat, Mar 21, 2020 at 5:53 PM Andres Freund <andres@anarazel.de> wrote:\n> My comment is entirely unrelated to GIN, but about the way the delay infrastructure manages state (in global vars).\n\nThe fact that ginInsertCleanup() uses \"stats != NULL\" to indicate\nwhether it is being called from within VACUUM or not is surely\nrelevant, or at least relevant to the issue that Mahendra just\nreported. shiftList() relies on this directly already.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 21 Mar 2020 17:59:25 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Why does [auto-]vacuum delay not report a wait event?" }, { "msg_contents": "On Sat, Mar 21, 2020 at 05:24:57PM -0700, Andres Freund wrote:\n> > Also, I noticed that SLEEP_ON_ASSERT comment (31338352b) wants to use pg_usleep\n> > \"which seems too short.\". Surely it should use pg_sleep, added at 782eefc58 a\n> > few years later ?\n> \n> I don't see problem with using sleep here?\n\nThere's no problem with pg_sleep (with no \"u\") - it just didn't exist when\nSLEEP_ON_ASSERT was added (and I guess it's potentially unsafe to do much of\nanything, like loop around pg_usleep(1e6)). I'm suggesting it *should* use\npg_sleep, rather than explaining why pg_usleep (with a \"u\") doesn't work.\n\n> > Also, there was a suggestion recently that this should have a separate\n> > vacuum_progress phase:\n> > |vacuumlazy.c:#define VACUUM_TRUNCATE_LOCK_WAIT_INTERVAL\t50 /* ms */\n> > |vacuumlazy.c:pg_usleep(VACUUM_TRUNCATE_LOCK_WAIT_INTERVAL * 1000L);\n> > \n> > I was planning to look at that eventually ; should it have a wait event instead\n> > or in addition ?\n> \n> A separate phase? How would that look like? 
We don't want to replace the\n> knowledge that currently e.g. the heap scan is in progress?\n\nI don't think that's an issue, since the heap scan is done at that point ?\nheap_vacuum_rel() (the publicly callable routine) calls lazy_scan_heap (which\ndoes everything) and then (optionally) lazy_truncate_heap() and then\nimmediately afterwards does:\n\n pgstat_progress_update_param(PROGRESS_VACUUM_PHASE,\n\t\t\tPROGRESS_VACUUM_PHASE_FINAL_CLEANUP);\n ...\n pgstat_progress_end_command();\n\n> > VACUUM VERBOSE wouldn't normally be run with cost_delay > 0, so that field will\n> > typically be zero, so I made it conditional\n> \n> I personally dislike conditional output like that, because it makes\n> parsing the output harder.\n\nI dislike it too, mostly because there's a comment explaining why it's done\nlike that, without any desirable use of the functionality. If it's not useful\nfor a case where the field is typically zero, it should go away until its\nutility is instantiated.\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 22 Mar 2020 09:38:29 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Why does [auto-]vacuum delay not report a wait event?" }, { "msg_contents": "Hi,\n\nOn 2020-03-21 17:24:57 -0700, Andres Freund wrote:\n> > diff --git a/src/include/pgstat.h b/src/include/pgstat.h\n> > index 851d0a7246..4db40e23cc 100644\n> > --- a/src/include/pgstat.h\n> > +++ b/src/include/pgstat.h\n> > @@ -848,7 +848,8 @@ typedef enum\n> > \tWAIT_EVENT_BASE_BACKUP_THROTTLE = PG_WAIT_TIMEOUT,\n> > \tWAIT_EVENT_PG_SLEEP,\n> > \tWAIT_EVENT_RECOVERY_APPLY_DELAY,\n> > -\tWAIT_EVENT_RECOVERY_RETRIEVE_RETRY_INTERVAL\n> > +\tWAIT_EVENT_RECOVERY_RETRIEVE_RETRY_INTERVAL,\n> > +\tWAIT_EVENT_VACUUM_DELAY,\n> > } WaitEventTimeout;\n> \n> Looks good to me - unless somebody protests I'm going to apply this\n> shortly.\n\nAnd pushed. 
The only thing I changed was to remove the added trailing ,\n:)\n\nThanks for the patch,\n\nAndres\n\n\n", "msg_date": "Mon, 23 Mar 2020 23:00:32 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Why does [auto-]vacuum delay not report a wait event?" } ]
[ { "msg_contents": "I noticed that when you want to define EXEC_BACKEND, perhaps to do some \nnon-Windows testing of that option, it's not clear how to do that. I \nwould have expected it in pg_config_manual.h, but it's actually in \nconfigure.in and in MSBuildProject.pm. So if you want to define it \nyourself, you kind of have to make up your own way to do it.\n\nI don't see why this should be like that. I propose the attached patch \nto move the thing to pg_config_manual.h.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Fri, 20 Mar 2020 16:24:39 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "where EXEC_BACKEND is defined" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> I noticed that when you want to define EXEC_BACKEND, perhaps to do some \n> non-Windows testing of that option, it's not clear how to do that. I \n> would have expected it in pg_config_manual.h, but it's actually in \n> configure.in and in MSBuildProject.pm. So if you want to define it \n> yourself, you kind of have to make up your own way to do it.\n\nYeah. Personally, I tend to add the #define to pg_config.h\nmanually after running configure; I find this convenient because\nthe effects automatically go away at \"make distclean\", and I\ndon't have to remember to undo anything.\n\n> I don't see why this should be like that. I propose the attached patch \n> to move the thing to pg_config_manual.h.\n\nI wouldn't use the option of editing pg_config_manual.h myself, because\nof the need to undo it + risk of forgetting and committing that as part\nof a patch. Still, this patch doesn't get in the way of continuing to\nset it in pg_config.h, and it does offer a place to document the thing\nwhich is a good idea as you say. 
So no objection here.\n\nOne small point is that I believe the existing code has the effect of\n\"#define EXEC_BACKEND 1\" not just \"#define EXEC_BACKEND\". I don't\nthink this matters to anyplace in the core code, but it's conceivable\nthat somebody has extension code written to assume the former.\nNonetheless, I'm +1 for re-standardizing on the latter, because it's\na couple less keystrokes when inserting a manual definition ;-).\nMight be worth mentioning in the commit log entry though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 20 Mar 2020 12:36:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: where EXEC_BACKEND is defined" }, { "msg_contents": "On 2020-03-20 17:36, Tom Lane wrote:\n> One small point is that I believe the existing code has the effect of\n> \"#define EXEC_BACKEND 1\" not just \"#define EXEC_BACKEND\". I don't\n> think this matters to anyplace in the core code, but it's conceivable\n> that somebody has extension code written to assume the former.\n> Nonetheless, I'm +1 for re-standardizing on the latter, because it's\n> a couple less keystrokes when inserting a manual definition ;-).\n> Might be worth mentioning in the commit log entry though.\n\nOk, done that way.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 25 Mar 2020 14:39:12 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: where EXEC_BACKEND is defined" } ]
[ { "msg_contents": "Hello,\n\nI’ve recently tried to build an extension that employs C++ files and also\npasses them to a linker to make a shared library. I’ve discovered a few\nissues with them:\n\n- in v10 CFLAGS_SL is not appended to the CXXFLAGS in Makefile.shlib,\nresulting in cpp files compiled without -fPIC, leading to errors when\ncreating the shared library out of them. In v11 and above CFLAGS_SL is\nprepended to the PG_CXXFLAGS, but there are no PG_CXXFLAGS on v10, and the\nMakefile does nothing to add them to CXXFLAGS. Patch is attached.\n\n- not just with v10, when building bc files from cpp, there are no CXXFLAGS\npassed; as a result, when building a source with non-standard flags (i.e\n-std=c++11) one would get an error during building of bc files.\n\nThe rules in the Makefile.global.(in) look like:\n\nifndef COMPILE.c.bc\n# -Wno-ignored-attributes added so gnu_printf doesn't trigger\n# warnings, when the main binary is compiled with C.\nCOMPILE.c.bc = $(CLANG) -Wno-ignored-attributes $(BITCODE_CFLAGS) $(CPPFLAGS) -flto=thin -emit-llvm -c\nendif\n\nifndef COMPILE.cxx.bc\nCOMPILE.cxx.bc = $(CLANG) -xc++ -Wno-ignored-attributes $(BITCODE_CXXFLAGS) $(CPPFLAGS) -flto=thin -emit-llvm -c\nendif\n\n%.bc : %.c\n\t$(COMPILE.c.bc) -o $@ $<\n\n%.bc : %.cpp\n\t$(COMPILE.cxx.bc) -o $@ $<\n\nHowever, there seems to be no way to override BITCODE_CXXFLAGS to include\nany specific C++ compilation flags that are also required to build object\nfiles from cpp. Same applies to .bc derived from .c files with\nBITCODE_CFLAGS respectively.\n\nI am wondering if we could define something like PG_BITCODE_CXXFLAGS and\nPG_BITCODE_CFLAGS in pgxs.mk to be able to override those. 
If this sounds\nlike a right strategy, I’ll prepare a patch.\n\nCheers,\nOleksii “Alex” Kluukin", "msg_date": "Fri, 20 Mar 2020 17:02:15 +0100", "msg_from": "Oleksii Kliukin <alexk@hintbits.com>", "msg_from_op": true, "msg_subject": "Issues with building cpp extensions on PostgreSQL 10+" }, { "msg_contents": "\nPatch applied to PG 10, thanks. I don't know about your other questions,\nbut if you want to propose a patch, we can review it. Thanks.\n\n---------------------------------------------------------------------------\n\nOn Fri, Mar 20, 2020 at 05:02:15PM +0100, Oleksii Kliukin wrote:\n> Hello,\n> \n> I’ve recently tried to build an extension that employs C++ files and also\n> passes them to a linker to make a shared library. I’ve discovered a few\n> issues with them:\n> \n> - in v10 CFLAGS_SL is not appended to the CXXFLAGS in Makefile.shlib,\n> resulting in cpp files compiled without -fPIC, leading to errors when\n> creating the shared library out of them. In v11 and above CFLAGS_SL is\n> prepended to the PG_CXXFLAGS, but there are no PG_CXXFLAGS on v10, and the\n> Makefile does nothing to add them to CXXFLAGS. 
Patch is attached.\n> \n> - not just with v10, when building bc files from cpp, there are no CXXFLAGS\n> passed; as a result, when building a source with non-standard flags (i.e\n> -std=c++11) one would get an error during building of bc files.\n> \n> The rules in the Makefile.global.(in) look like:\n> \n> ifndef COMPILE.c.bc\n> # -Wno-ignored-attributes added so gnu_printf doesn't trigger\n> # warnings, when the main binary is compiled with C.\n> COMPILE.c.bc = $(CLANG) -Wno-ignored-attributes $(BITCODE_CFLAGS) $(CPPFLAGS) -flto=thin -emit-llvm -c\n> endif\n> \n> ifndef COMPILE.cxx.bc\n> COMPILE.cxx.bc = $(CLANG) -xc++ -Wno-ignored-attributes $(BITCODE_CXXFLAGS) $(CPPFLAGS) -flto=thin -emit-llvm -c\n> endif\n> \n> %.bc : %.c\n> \t$(COMPILE.c.bc) -o $@ $<\n> \n> %.bc : %.cpp\n> \t$(COMPILE.cxx.bc) -o $@ $<\n> \n> However, there seems to be no way to override BITCODE_CXXFLAGS to include\n> any specific C++ compilation flags that are also required to build object\n> files from cpp. Same applies to .bc derived from .c files with\n> BITCODE_CFLAGS respectively.\n> \n> I am wondering if we could define something like PG_BITCODE_CXXFLAGS and\n> PG_BITCODE_CFLAGS in pgxs.mk to be able to override those. If this sound\n> like a right strategy, I’ll prepare a patch.\n> \n> Cheers,\n> Oleksii “Alex” Kluukin\n> \n\n> diff --git a/src/Makefile.shlib b/src/Makefile.shlib\n> index eb45daedc8..342496eecd 100644\n> --- a/src/Makefile.shlib\n> +++ b/src/Makefile.shlib\n> @@ -101,6 +101,7 @@ endif\n> # Try to keep the sections in some kind of order, folks...\n> \n> override CFLAGS += $(CFLAGS_SL)\n> +override CXXFLAGS += $(CFLAGS_SL)\n> ifdef SO_MAJOR_VERSION\n> # libraries ought to use this to refer to versioned gettext domain names\n> override CPPFLAGS += -DSO_MAJOR_VERSION=$(SO_MAJOR_VERSION)\n\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Tue, 31 Mar 2020 22:26:38 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Issues with building cpp extensions on PostgreSQL 10+" } ]
[ { "msg_contents": "Hi PGHackers:\r\n\r\nCurrently, correlated IN/Any subquery always gets planned as a SubPlan which leads to poor performance:\r\npostgres=# explain (costs off) select count(*) from s where s.n in (select l.n from l where l.u != s.u);\r\n QUERY PLAN\r\n------------------------------------\r\nAggregate\r\n -> Seq Scan on s\r\n Filter: (SubPlan 1)\r\n SubPlan 1\r\n -> Seq Scan on l\r\n Filter: (u <> s.u)\r\n\r\npostgres=# select count(*) from s where s.n in (select l.n from l where l.u != s.u);\r\nTime: 3419.466 ms (00:03.419)\r\nHowever, you can rewrite the query using exists which will be executed using join. In this example the join plan is more than 3 orders of magnitude faster than the SubPlan:\r\npostgres=# explain (costs off) select count(*) from s where exists (select 1 from l where l.n = s.n and l.u != s.u);\r\n QUERY PLAN\r\n---------------------------------------\r\nAggregate\r\n -> Merge Semi Join\r\n Merge Cond: (s.n = l.n)\r\n Join Filter: (l.u <> s.u)\r\n -> Index Scan using s_n on s\r\n -> Index Scan using l_n on l\r\n\r\n\r\npostgres=# select count(*) from s where exists (select 1 from l where l.n = s.n and l.u != s.u);\r\nTime: 1.188 ms\r\n\r\n\r\nTable s has 10 rows, table l has 1,000,000 rows.\r\n\r\nThis patch enables correlated IN/Any subquery to be transformed to join, the transformation is allowed only when the correlated Var is in the where clause of the subquery. 
It covers the most common correlated cases and follows the same criteria that is followed by the correlated Exists transformation code.\r\n\r\nHere is the new query plan for the same correlated IN query:\r\npostgres=# explain (costs off) select count(*) from s where s.n in (select l.n from l where l.u != s.u);\r\n QUERY PLAN\r\n\r\nAggregate\r\n -> Merge Semi Join\r\n Merge Cond: (s.n = l.n)\r\n Join Filter: (l.u <> s.u)\r\n -> Index Scan using s_n on s\r\n -> Index Scan using l_n on l\r\n\r\npostgres=# select count(*) from s where s.n in (select l.n from l where l.u != s.u);\r\nTime: 1.693 ms\r\n________________________________\r\nAlso the patch introduces a new GUC enable_correlated_any_transform (on by default) to guard the optimization. Test cases are included in the patch. Comments are welcome!\r\n\r\n-----------\r\nZheng Li\r\nAWS, Amazon Aurora PostgreSQL", "msg_date": "Fri, 20 Mar 2020 16:05:59 +0000", "msg_from": "\"Li, Zheng\" <zhelli@amazon.com>", "msg_from_op": true, "msg_subject": "Correlated IN/Any Subquery Transformation" }, { "msg_contents": "\"Li, Zheng\" <zhelli@amazon.com> writes:\n> This patch enables correlated IN/Any subquery to be transformed to join, the transformation is allowed only when the correlated Var is in the where clause of the subquery. It covers the most common correlated cases and follows the same criteria that is followed by the correlated Exists transformation code.\n\nIt's too late to include this in v13, but please add the patch to the\nnext commitfest so that we remember to consider it for v14.\n\nhttps://commitfest.postgresql.org\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 20 Mar 2020 12:42:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Correlated IN/Any Subquery Transformation" } ]
[ { "msg_contents": "I am hacking some GIST code for a research project and wanted clarification\nabout what exactly a secondary split is in GIST. More specifically I am\nwondering why the supportSecondarySplit function (which is in\nsrc/backend/access/gist/gistsplit.c) can assume that the data is currently\non the left side in order to swap it.\n\n/*\n* Clean up when we did a secondary split but the user-defined PickSplit\n* method didn't support it (leaving spl_ldatum_exists or spl_rdatum_exists\n* true).\n*\n* We consider whether to swap the left and right outputs of the secondary\n* split; this can be worthwhile if the penalty for merging those tuples into\n* the previously chosen sets is less that way.\n*\n* In any case we must update the union datums for the current column by\n* adding in the previous union keys (oldL/oldR), since the user-defined\n* PickSplit method didn't do so.\n*/\nstatic void\nsupportSecondarySplit(Relation r, GISTSTATE *giststate, int attno,\nGIST_SPLITVEC *sv, Datum oldL, Datum oldR)\n{\n\nBest,\nPeter\n\n-- \nPeter Griggs\nMasters of Engineering (Meng) in Computer Science\nMassachusetts Institute of Technology | 2020", "msg_date": "Fri, 20 Mar 2020 14:36:02 -0700", "msg_from": "Peter Griggs <petergriggs33@gmail.com>", "msg_from_op": true, "msg_subject": "GiST secondary split" }, { "msg_contents": "Hi, Peter!\n\nOn Sat, Mar 21, 2020 at 12:36 AM Peter Griggs <petergriggs33@gmail.com> wrote:\n> I am hacking some GIST code for a research project and wanted clarification about what exactly a secondary split is in GIST.\n\nSecondary split in GiST is the split by the second and subsequent columns\nof multicolumn GiST indexes. In general it works as follows.\nThe split by the first column produces two union keys. It might happen\nthat some of the first-column values are contained in both union keys.\nIf so, the corresponding tuples are subject to a secondary split.\n\n> More specifically I am wondering why the supportSecondarySplit function (which is in src/backend/access/gist/gistsplit.c) can assume that the data is currently on the left side in order to swap it.\n\nI don't think it assumes that all the data is currently on the left\nside. There are left and right sides of the primary split, and at the\nsame time left and right sides of the secondary split. These might be\nunioned straight or crosswise. The name of the leaveOnLeft variable\nmight be confusing. 
leaveOnLeft == true means straight union, while\nleaveOnLeft == false means crosswise union.\n\n------\nAlexander Korotkov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n", "msg_date": "Sat, 21 Mar 2020 14:03:37 +0300", "msg_from": "Alexander Korotkov <a.korotkov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: GiST secondary split" } ]
[ { "msg_contents": "Hey Folks!\nirc://irc.freenode.net/postgresql link is not working and I am not able to\nuse the chat option to clear some doubt. here is an ss if you require it.\n\nThanking you in advance\nAnanya Srivastava\n[image: image.png]", "msg_date": "Sat, 21 Mar 2020 20:03:36 +0530", "msg_from": "Ananya Srivastava <ananyavsrivatava@gmail.com>", "msg_from_op": true, "msg_subject": "GSoC chat link not working" }, { "msg_contents": "Greetings,\n\n* Ananya Srivastava (ananyavsrivatava@gmail.com) wrote:\n> irc://irc.freenode.net/postgresql link is not working and I am not able to\n> use the chat option to clear some doubt. here is an ss if you require it.\n\nYou simply need a client that works with irc links to utilize that link.\n\nFor a web-based interface, you could go here:\n\nhttps://webchat.freenode.net/\n\nAnd then, after logging in, join the #postgresql channel.\n\nThanks,\n\nStephen", "msg_date": "Mon, 23 Mar 2020 18:50:54 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: GSoC chat link not working" } ]
[ { "msg_contents": "Hey Hackers,\n\nWe had a database running on Debian whose disks became corrupted, and it is\ncurrently not possible to access those disks; they are unreadable.\n\nFor backup we used:\n\nFile System Level Backup\nContinuous Archiving and Point-in-Time Recovery (PITR)\n\nWe used tablespaces for the entire database.\n\nWhen trying to recover from the backup we noticed that the backup was\npartially complete, here is the info:\n\nPG_VERSION 8.2.4\n\nbase we are missing the files\n\nglobal we are missing the files\n\npg_clog we are missing the files\n\npg_multixact we are missing the files\n\npg_subtrans we are missing the files\n\npg_tblspc All files are preserved there from a backup\n\npg_twophase we are missing the files\n\npg_xlog we are missing the files\n\n\nPrevious backups had the same error. We only have backups from the last 2\nweeks.\n\nWe have a very old (about 10 years old) data/\n\nWhat are the alternatives for recovery here? (besides trying to recover\ndisk data with specialists)\n\nThanks!\n\nPhillip Black.", "msg_date": "Sat, 21 Mar 2020 17:48:03 -0600", "msg_from": "Phillip Black <phillipvblack@gmail.com>", "msg_from_op": true, "msg_subject": "Database recovery from tablespace only" }, { "msg_contents": "On Sat, Mar 21, 2020 at 05:48:03PM -0600, Phillip Black wrote:\n> Hey Hackers,\n\nHi,\n\nThis list is for development and bug reports. I think you'll want a\nprofessional support contract, for postgres or for generic data recovery.\n\nhttps://www.postgresql.org/support/professional_support/\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 22 Mar 2020 13:25:36 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Database recovery from tablespace only" }, { "msg_contents": "On Sat, Mar 21, 2020 at 4:48 PM Phillip Black <phillipvblack@gmail.com>\nwrote:\n\n> What are the alternatives for recovery here? (besides trying to recover\n> disk data with specialists)\n>\n\nYou are running ancient, unsupported, software on hardware that has a\nlifetime which you've probably also exceeded and are not actively testing\nbackups. It is unlikely you are going to get a positive result without\nfinally spending some time and money by hiring a specialist. Or, just\nconsider the lost data as an expected outcome given previous behaviors and\nmove forward. Or, at minimum, someone who can sit/login to the computer\nand network setup in question and interact with it - even if they cannot do\nthe disk data recovery they can at least provide a professional opinion.\n\nDavid J.", "msg_date": "Sun, 22 Mar 2020 12:17:54 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Database recovery from tablespace only" } ]
[ { "msg_contents": "Original, long thread\nhttps://www.postgresql.org/message-id/flat/CAA4eK1%2Bnw1FBK3_sDnW%2B7kB%2Bx4qbDJqetgqwYW8k2xv82RZ%2BKw%40mail.gmail.com#b1745ee853b137043e584b500b41300f\n\ndiff --git a/doc/src/sgml/ref/vacuum.sgml b/doc/src/sgml/ref/vacuum.sgml\nindex ab1b8c2398..140637983a 100644\n--- a/doc/src/sgml/ref/vacuum.sgml\n+++ b/doc/src/sgml/ref/vacuum.sgml\n@@ -237,15 +237,15 @@ VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ ANALYZE ] [ <replaceable class=\"paramet\n <term><literal>PARALLEL</literal></term>\n <listitem>\n <para>\n- Perform vacuum index and cleanup index phases of <command>VACUUM</command>\n+ Perform vacuum index and index cleanup phases of <command>VACUUM</command>\n in parallel using <replaceable class=\"parameter\">integer</replaceable>\n- background workers (for the detail of each vacuum phases, please\n+ background workers (for the detail of each vacuum phase, please\n refer to <xref linkend=\"vacuum-phases\"/>). If the\n- <literal>PARALLEL</literal> option is omitted, then\n- <command>VACUUM</command> decides the number of workers based on number\n- of indexes that support parallel vacuum operation on the relation which\n- is further limited by <xref linkend=\"guc-max-parallel-workers-maintenance\"/>.\n- The index can participate in a parallel vacuum if and only if the size\n+ <literal>PARALLEL</literal> option is omitted, then the number of workers\n+ is determined based on the number of indexes that support parallel vacuum\n+ operation on the relation, and is further limited by <xref\n+ linkend=\"guc-max-parallel-workers-maintenance\"/>.\n+ An index can participate in parallel vacuum if and only if the size\n of the index is more than <xref linkend=\"guc-min-parallel-index-scan-size\"/>.\n Please note that it is not guaranteed that the number of parallel workers\n specified in <replaceable class=\"parameter\">integer</replaceable> will\n@@ -253,7 +253,7 @@ VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ ANALYZE ] [ <replaceable 
class=\"paramet\n workers than specified, or even with no workers at all. Only one worker\n can be used per index. So parallel workers are launched only when there\n are at least <literal>2</literal> indexes in the table. Workers for\n- vacuum launches before starting each phase and exit at the end of\n+ vacuum are launched before the start of each phase and terminate at the end of\n the phase. These behaviors might change in a future release. This\n option can't be used with the <literal>FULL</literal> option.\n </para>\n@@ -372,7 +372,7 @@ VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ ANALYZE ] [ <replaceable class=\"paramet\n <command>VACUUM</command> causes a substantial increase in I/O traffic,\n which might cause poor performance for other active sessions. Therefore,\n it is sometimes advisable to use the cost-based vacuum delay feature. For\n- parallel vacuum, each worker sleeps proportional to the work done by that\n+ parallel vacuum, each worker sleeps proportionally to the work done by that\n worker. 
See <xref linkend=\"runtime-config-resource-vacuum-cost\"/> for\n details.\n </para>\n\ndiff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c\nindex e017db4446..0ab0bea312 100644\n--- a/src/backend/access/heap/vacuumlazy.c\n+++ b/src/backend/access/heap/vacuumlazy.c\n@@ -194,7 +194,7 @@ typedef struct LVShared\n \t * live tuples in the index vacuum case or the new live tuples in the\n \t * index cleanup case.\n \t *\n-\t * estimated_count is true if the reltuples is an estimated value.\n+\t * estimated_count is true if reltuples is an estimated value.\n \t */\n \tdouble\t\treltuples;\n \tbool\t\testimated_count;\n@@ -757,7 +757,7 @@ skip_blocks(Relation onerel, VacuumParams *params, BlockNumber *next_unskippable\n *\t\tto reclaim dead line pointers.\n *\n *\t\tIf the table has at least two indexes, we execute both index vacuum\n- *\t\tand index cleanup with parallel workers unless the parallel vacuum is\n+ *\t\tand index cleanup with parallel workers unless parallel vacuum is\n *\t\tdisabled. In a parallel vacuum, we enter parallel mode and then\n *\t\tcreate both the parallel context and the DSM segment before starting\n *\t\theap scan so that we can record dead tuples to the DSM segment. All\n@@ -836,7 +836,7 @@ lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats,\n \tvacrelstats->latestRemovedXid = InvalidTransactionId;\n \n \t/*\n-\t * Initialize the state for a parallel vacuum. As of now, only one worker\n+\t * Initialize state for a parallel vacuum. 
As of now, only one worker\n \t * can be used for an index, so we invoke parallelism only if there are at\n \t * least two indexes on a table.\n \t */\n@@ -864,7 +864,7 @@ lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats,\n \t}\n \n \t/*\n-\t * Allocate the space for dead tuples in case the parallel vacuum is not\n+\t * Allocate the space for dead tuples in case parallel vacuum is not\n \t * initialized.\n \t */\n \tif (!ParallelVacuumIsActive(lps))\n@@ -2111,7 +2111,7 @@ parallel_vacuum_index(Relation *Irel, IndexBulkDeleteResult **stats,\n \t\tshared_indstats = get_indstats(lvshared, idx);\n \n \t\t/*\n-\t\t * Skip processing indexes that doesn't participate in parallel\n+\t\t * Skip processing indexes that don't participate in parallel\n \t\t * operation\n \t\t */\n \t\tif (shared_indstats == NULL ||\n@@ -2223,7 +2223,7 @@ vacuum_one_index(Relation indrel, IndexBulkDeleteResult **stats,\n \t\tshared_indstats->updated = true;\n \n \t\t/*\n-\t\t * Now that the stats[idx] points to the DSM segment, we don't need\n+\t\t * Now that stats[idx] points to the DSM segment, we don't need\n \t\t * the locally allocated results.\n \t\t */\n \t\tpfree(*stats);\n@@ -2329,7 +2329,7 @@ lazy_vacuum_index(Relation indrel, IndexBulkDeleteResult **stats,\n *\tlazy_cleanup_index() -- do post-vacuum cleanup for one index relation.\n *\n *\t\treltuples is the number of heap tuples and estimated_count is true\n- *\t\tif the reltuples is an estimated value.\n+ *\t\tif reltuples is an estimated value.\n */\n static void\n lazy_cleanup_index(Relation indrel,\n@@ -2916,9 +2916,9 @@ heap_page_is_all_visible(Relation rel, Buffer buf,\n /*\n * Compute the number of parallel worker processes to request. Both index\n * vacuum and index cleanup can be executed with parallel workers. 
The index\n- * is eligible for parallel vacuum iff it's size is greater than\n+ * is eligible for parallel vacuum iff its size is greater than\n * min_parallel_index_scan_size as invoking workers for very small indexes\n- * can hurt the performance.\n+ * can hurt performance.\n *\n * nrequested is the number of parallel workers that user requested. If\n * nrequested is 0, we compute the parallel degree based on nindexes, that is\n@@ -2937,7 +2937,7 @@ compute_parallel_vacuum_workers(Relation *Irel, int nindexes, int nrequested,\n \tint\t\t\ti;\n \n \t/*\n-\t * We don't allow to perform parallel operation in standalone backend or\n+\t * We don't allow performing parallel operation in standalone backend or\n \t * when parallelism is disabled.\n \t */\n \tif (!IsUnderPostmaster || max_parallel_maintenance_workers == 0)\n@@ -3010,7 +3010,7 @@ prepare_index_statistics(LVShared *lvshared, bool *can_parallel_vacuum,\n }\n \n /*\n- * Update index statistics in pg_class if the statistics is accurate.\n+ * Update index statistics in pg_class if the statistics are accurate.\n */\n static void\n update_index_statistics(Relation *Irel, IndexBulkDeleteResult **stats,\n@@ -3181,7 +3181,7 @@ begin_parallel_vacuum(Oid relid, Relation *Irel, LVRelStats *vacrelstats,\n /*\n * Destroy the parallel context, and end parallel mode.\n *\n- * Since writes are not allowed during the parallel mode, so we copy the\n+ * Since writes are not allowed during parallel mode, copy the\n * updated index statistics from DSM in local memory and then later use that\n * to update the index statistics. 
One might think that we can exit from\n * parallel mode, update the index statistics and then destroy parallel\n@@ -3288,7 +3288,7 @@ skip_parallel_vacuum_index(Relation indrel, LVShared *lvshared)\n * Perform work within a launched parallel process.\n *\n * Since parallel vacuum workers perform only index vacuum or index cleanup,\n- * we don't need to report the progress information.\n+ * we don't need to report progress information.\n */\n void\n parallel_vacuum_main(dsm_segment *seg, shm_toc *toc)\ndiff --git a/src/backend/access/transam/parallel.c b/src/backend/access/transam/parallel.c\nindex df06e7d174..ac348b312c 100644\n--- a/src/backend/access/transam/parallel.c\n+++ b/src/backend/access/transam/parallel.c\n@@ -493,7 +493,7 @@ ReinitializeParallelDSM(ParallelContext *pcxt)\n \n /*\n * Reinitialize parallel workers for a parallel context such that we could\n- * launch the different number of workers. This is required for cases where\n+ * launch a different number of workers. This is required for cases where\n * we need to reuse the same DSM segment, but the number of workers can\n * vary from run-to-run.\n */\ndiff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c\nindex d625d17bf4..76d33b1ba2 100644\n--- a/src/backend/commands/vacuum.c\n+++ b/src/backend/commands/vacuum.c\n@@ -2034,23 +2034,23 @@ vacuum_delay_point(void)\n /*\n * Computes the vacuum delay for parallel workers.\n *\n- * The basic idea of a cost-based vacuum delay for parallel vacuum is to allow\n- * each worker to sleep proportional to the work done by it. We achieve this\n+ * The basic idea of a cost-based delay for parallel vacuum is to force\n+ * each worker to sleep in proportion to the share of work it's done. 
We achieve this\n * by allowing all parallel vacuum workers including the leader process to\n * have a shared view of cost related parameters (mainly VacuumCostBalance).\n- * We allow each worker to update it as and when it has incurred any cost and\n+ * We allow each worker to update it AS AND WHEN it has incurred any cost and\n * then based on that decide whether it needs to sleep. We compute the time\n * to sleep for a worker based on the cost it has incurred\n * (VacuumCostBalanceLocal) and then reduce the VacuumSharedCostBalance by\n- * that amount. This avoids letting the workers sleep who have done less or\n- * no I/O as compared to other workers and therefore can ensure that workers\n- * who are doing more I/O got throttled more.\n+ * that amount. This avoids putting to sleep those workers which have done less\n+ * I/O than other workers and therefore ensure that workers\n+ * which are doing more I/O got throttled more.\n *\n- * We allow any worker to sleep only if it has performed the I/O above a\n+ * We force a worker to sleep only if it has performed I/O above a\n * certain threshold, which is calculated based on the number of active\n * workers (VacuumActiveNWorkers), and the overall cost balance is more than\n- * VacuumCostLimit set by the system. The testing reveals that we achieve\n- * the required throttling if we allow a worker that has done more than 50%\n+ * VacuumCostLimit set by the system. Testing reveals that we achieve\n+ * the required throttling if we force a worker that has done more than 50%\n * of its share of work to sleep.\n */\n static double\ndiff --git a/src/include/commands/vacuum.h b/src/include/commands/vacuum.h\nindex 2779bea5c9..a4cd721400 100644\n--- a/src/include/commands/vacuum.h\n+++ b/src/include/commands/vacuum.h\n@@ -225,7 +225,7 @@ typedef struct VacuumParams\n \n \t/*\n \t * The number of parallel vacuum workers. 0 by default which means choose\n-\t * based on the number of indexes. 
-1 indicates a parallel vacuum is\n+\t * based on the number of indexes. -1 indicates parallel vacuum is\n \t * disabled.\n \t */\n \tint\t\t\tnworkers;\n-- \n2.17.0", "msg_date": "Sat, 21 Mar 2020 21:18:01 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "doc review for parallel vacuum" }, { "msg_contents": "On Sun, Mar 22, 2020 at 7:48 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> Original, long thread\n> https://www.postgresql.org/message-id/flat/CAA4eK1%2Bnw1FBK3_sDnW%2B7kB%2Bx4qbDJqetgqwYW8k2xv82RZ%2BKw%40mail.gmail.com#b1745ee853b137043e584b500b41300f\n>\n\nI have comments/questions on the patches:\ndoc changes\n-------------------------\n1.\n <para>\n- Perform vacuum index and cleanup index phases of\n<command>VACUUM</command>\n+ Perform vacuum index and index cleanup phases of\n<command>VACUUM</command>\n\nWhy is the proposed text improvement over what is already there?\n\n2.\nIf the\n- <literal>PARALLEL</literal> option is omitted, then\n- <command>VACUUM</command> decides the number of workers based on number\n- of indexes that support parallel vacuum operation on the relation which\n- is further limited by <xref\nlinkend=\"guc-max-parallel-workers-maintenance\"/>.\n- The index can participate in a parallel vacuum if and only if the size\n+ <literal>PARALLEL</literal> option is omitted, then the number of workers\n+ is determined based on the number of indexes that support parallel vacuum\n+ operation on the relation, and is further limited by <xref\n+ linkend=\"guc-max-parallel-workers-maintenance\"/>.\n+ An index can participate in parallel vacuum if and only if the size\n of the index is more than <xref\nlinkend=\"guc-min-parallel-index-scan-size\"/>.\n\nHere, it is not clear to me why the proposed text is better than what\nwe already have?\n\n3.\n..\n- vacuum launches before starting each phase and exit at the end of\n+ vacuum are launched before the start of each phase and\nterminate at the end 
of\n\nIt is better to use 'exit' instead of 'terminate' as we are not\nforcing the workers to end rather we wait for them to exit.\n\n\n\nPatch changes\n-------------------------\n1.\n- * and index cleanup with parallel workers unless the parallel vacuum is\n+ * and index cleanup with parallel workers unless parallel vacuum is\n\nAs per my understanding 'parallel vacuum' is a noun phrase, so we\nshould have a determiner before it.\n\n2.\n- * Since writes are not allowed during the parallel mode, so we copy the\n+ * Since writes are not allowed during parallel mode, copy the\n\nSimilar to above, I think here also we should have a determiner before\n'parallel mode'.\n\n3.\n- * The basic idea of a cost-based vacuum delay for parallel vacuum is to allow\n- * each worker to sleep proportional to the work done by it. We achieve this\n+ * The basic idea of a cost-based delay for parallel vacuum is to force\n+ * each worker to sleep in proportion to the share of work it's done.\nWe achieve this\n\nI am not sure if it is an improvement to use 'to force' instead of 'to\nallow' because there is another criteria as well which decides whether\nthe worker will sleep or not. I am also not sure if the second change\n(share of work it's) in this sentence is a clear improvement.\n\n4.\n- * We allow each worker to update it as and when it has incurred any cost and\n+ * We allow each worker to update it AS AND WHEN it has incurred any cost and\n\nI don't see why it is necessary to make this part bold? 
We are using\nit at one other place in the code in the way it is used here.\n\n5.\n- * We allow any worker to sleep only if it has performed the I/O above a\n+ * We force a worker to sleep only if it has performed I/O above a\n * certain threshold\n\nHmm, again I am not sure if we should use 'force' here instead of 'allow'.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 23 Mar 2020 10:34:37 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: doc review for parallel vacuum" }, { "msg_contents": "On Mon, Mar 23, 2020 at 10:34 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sun, Mar 22, 2020 at 7:48 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > Original, long thread\n> > https://www.postgresql.org/message-id/flat/CAA4eK1%2Bnw1FBK3_sDnW%2B7kB%2Bx4qbDJqetgqwYW8k2xv82RZ%2BKw%40mail.gmail.com#b1745ee853b137043e584b500b41300f\n> >\n>\n> I have comments/questions on the patches:\n> doc changes\n> -------------------------\n> 1.\n> <para>\n> - Perform vacuum index and cleanup index phases of\n> <command>VACUUM</command>\n> + Perform vacuum index and index cleanup phases of\n> <command>VACUUM</command>\n>\n> Why is the proposed text improvement over what is already there?\n>\n\nI have kept the existing text as it is for this case.\n\n> 2.\n> If the\n> - <literal>PARALLEL</literal> option is omitted, then\n> - <command>VACUUM</command> decides the number of workers based on number\n> - of indexes that support parallel vacuum operation on the relation which\n> - is further limited by <xref\n> linkend=\"guc-max-parallel-workers-maintenance\"/>.\n> - The index can participate in a parallel vacuum if and only if the size\n> + <literal>PARALLEL</literal> option is omitted, then the number of workers\n> + is determined based on the number of indexes that support parallel vacuum\n> + operation on the relation, and is further limited by <xref\n> + 
linkend=\"guc-max-parallel-workers-maintenance\"/>.\n> + An index can participate in parallel vacuum if and only if the size\n> of the index is more than <xref\n> linkend=\"guc-min-parallel-index-scan-size\"/>.\n>\n> Here, it is not clear to me why the proposed text is better than what\n> we already have?\n>\n\nChanged as per your suggestion.\n\n> 3.\n> ..\n> - vacuum launches before starting each phase and exit at the end of\n> + vacuum are launched before the start of each phase and\n> terminate at the end of\n>\n> It is better to use 'exit' instead of 'terminate' as we are not\n> forcing the workers to end rather we wait for them to exit.\n>\n\nI have used 'exit' instead of 'terminate' here.\n\n>\n>\n> Patch changes\n> -------------------------\n> 1.\n> - * and index cleanup with parallel workers unless the parallel vacuum is\n> + * and index cleanup with parallel workers unless parallel vacuum is\n>\n> As per my understanding 'parallel vacuum' is a noun phrase, so we\n> should have a determiner before it.\n>\n\nChanged as per your suggestion.\n\n> 2.\n> - * Since writes are not allowed during the parallel mode, so we copy the\n> + * Since writes are not allowed during parallel mode, copy the\n>\n> Similar to above, I think here also we should have a determiner before\n> 'parallel mode'.\n>\n\nChanged as per your suggestion.\n\n> 3.\n> - * The basic idea of a cost-based vacuum delay for parallel vacuum is to allow\n> - * each worker to sleep proportional to the work done by it. We achieve this\n> + * The basic idea of a cost-based delay for parallel vacuum is to force\n> + * each worker to sleep in proportion to the share of work it's done.\n> We achieve this\n>\n> I am not sure if it is an improvement to use 'to force' instead of 'to\n> allow' because there is another criteria as well which decides whether\n> the worker will sleep or not. 
I am also not sure if the second change\n> (share of work it's) in this sentence is a clear improvement.\n>\n\nI have used 'to allow' in above text, otherwise, accepted your suggestions.\n\n> 4.\n> - * We allow each worker to update it as and when it has incurred any cost and\n> + * We allow each worker to update it AS AND WHEN it has incurred any cost and\n>\n> I don't see why it is necessary to make this part bold? We are using\n> it at one other place in the code in the way it is used here.\n>\n\nKept the existing text as it is.\n\n> 5.\n> - * We allow any worker to sleep only if it has performed the I/O above a\n> + * We force a worker to sleep only if it has performed I/O above a\n> * certain threshold\n>\n> Hmm, again I am not sure if we should use 'force' here instead of 'allow'.\n>\n\nKept the usage of 'allow'.\n\nI have combined both of your patches. Let me know if you have any\nmore suggestions, otherwise, I am planning to push this tomorrow.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Tue, 7 Apr 2020 09:57:46 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: doc review for parallel vacuum" }, { "msg_contents": "On Tue, Apr 07, 2020 at 09:57:46AM +0530, Amit Kapila wrote:\n> On Mon, Mar 23, 2020 at 10:34 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Sun, Mar 22, 2020 at 7:48 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > >\n> > > Original, long thread\n> > > https://www.postgresql.org/message-id/flat/CAA4eK1%2Bnw1FBK3_sDnW%2B7kB%2Bx4qbDJqetgqwYW8k2xv82RZ%2BKw%40mail.gmail.com#b1745ee853b137043e584b500b41300f\n> > >\n> >\n> > I have comments/questions on the patches:\n> > doc changes\n> > -------------------------\n> > 1.\n> > <para>\n> > - Perform vacuum index and cleanup index phases of <command>VACUUM</command>\n> > + Perform vacuum index and index cleanup phases of <command>VACUUM</command>\n> >\n> > Why is the proposed text 
improvement over what is already there?\n> \n> I have kept the existing text as it is for this case.\n\nProbably it should say \"index vacuum and index cleanup\", which is also what the\ncomment in vacuumlazy.c says.\n\n> > 2.\n> > If the\n> > - <literal>PARALLEL</literal> option is omitted, then\n> > - <command>VACUUM</command> decides the number of workers based on number\n> > - of indexes that support parallel vacuum operation on the relation which\n> > - is further limited by <xref\n> > linkend=\"guc-max-parallel-workers-maintenance\"/>.\n> > - The index can participate in a parallel vacuum if and only if the size\n> > + <literal>PARALLEL</literal> option is omitted, then the number of workers\n> > + is determined based on the number of indexes that support parallel vacuum\n> > + operation on the relation, and is further limited by <xref\n> > + linkend=\"guc-max-parallel-workers-maintenance\"/>.\n> > + An index can participate in parallel vacuum if and only if the size\n> > of the index is more than <xref\n> > linkend=\"guc-min-parallel-index-scan-size\"/>.\n> >\n> > Here, it is not clear to me why the proposed text is better than what\n> > we already have?\n> Changed as per your suggestion.\n\nTo answer your question:\n\n\"VACUUM decides\" doesn't sound consistent with docs.\n\n\"based on {+the+} number\"\n=> here, you're veritably missing an article...\n\n\"relation which\" technically means that the *relation* is \"is further limited\nby\"...\n\n> > Patch changes\n> > -------------------------\n> > 1.\n> > - * and index cleanup with parallel workers unless the parallel vacuum is\n> > + * and index cleanup with parallel workers unless parallel vacuum is\n> >\n> > As per my understanding 'parallel vacuum' is a noun phrase, so we\n> > should have a determiner before it.\n> \n> Changed as per your suggestion.\n\nThanks (I think you meant an \"article\").\n\n> > - * We allow each worker to update it as and when it has incurred any cost and\n> > + * We allow each 
worker to update it AS AND WHEN it has incurred any cost and\n> >\n> > I don't see why it is necessary to make this part bold? We are using\n> > it at one other place in the code in the way it is used here.\n> >\n> \n> Kept the existing text as it is.\n\nI meant this as a question. I'm not sure what \"as and when means\" ? \"If and\nwhen\" ?\n\n> I have combined both of your patches. Let me know if you have any\n> more suggestions, otherwise, I am planning to push this tomorrow.\n\nIn the meantime, I found some more issues, so I rebased on top of your patch so\nyou can review it.\n\n-- \nJustin", "msg_date": "Mon, 6 Apr 2020 23:55:51 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: doc review for parallel vacuum" }, { "msg_contents": "On Tue, Apr 7, 2020 at 10:25 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Tue, Apr 07, 2020 at 09:57:46AM +0530, Amit Kapila wrote:\n> > On Mon, Mar 23, 2020 at 10:34 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > On Sun, Mar 22, 2020 at 7:48 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > >\n> > > > Original, long thread\n> > > > https://www.postgresql.org/message-id/flat/CAA4eK1%2Bnw1FBK3_sDnW%2B7kB%2Bx4qbDJqetgqwYW8k2xv82RZ%2BKw%40mail.gmail.com#b1745ee853b137043e584b500b41300f\n> > > >\n> > >\n> > > I have comments/questions on the patches:\n> > > doc changes\n> > > -------------------------\n> > > 1.\n> > > <para>\n> > > - Perform vacuum index and cleanup index phases of <command>VACUUM</command>\n> > > + Perform vacuum index and index cleanup phases of <command>VACUUM</command>\n> > >\n> > > Why is the proposed text improvement over what is already there?\n> >\n> > I have kept the existing text as it is for this case.\n>\n> Probably it should say \"index vacuum and index cleanup\", which is also what the\n> comment in vacuumlazy.c says.\n>\n\nOkay, that makes sense.\n\n>\n> > > - * We allow each worker to update it as and when it has incurred any cost 
and\n> > > + * We allow each worker to update it AS AND WHEN it has incurred any cost and\n> > >\n> > > I don't see why it is necessary to make this part bold? We are using\n> > > it at one other place in the code in the way it is used here.\n> > >\n> >\n> > Kept the existing text as it is.\n>\n> I meant this as a question. I'm not sure what \"as and when means\" ? \"If and\n> when\" ?\n>\n\nIt means the \"at the time when\" worker performed any I/O. This has\nbeen used at some other place in code as well.\n\n> > I have combined both of your patches. Let me know if you have any\n> > more suggestions, otherwise, I am planning to push this tomorrow.\n>\n> In the meantime, I found some more issues, so I rebased on top of your patch so\n> you can review it.\n>\n\n- The <option>PARALLEL</option> option is used only for vacuum purpose.\n- Even if this option is specified with <option>ANALYZE</option> option\n- it does not affect <option>ANALYZE</option>.\n+ The <option>PARALLEL</option> option is used only for vacuum operations.\n+ If specified along with <option>ANALYZE</option>, the behavior during\n+ <literal>ANALYZE</literal> is unchanged.\n\nI think the proposed text makes the above text unclear especially \"the\nbehavior during ANALYZE is unchanged.\". Basically this option has\nnothing to do with the behavior of vacuum or analyze. I think we\nshould be more specific as the current text.\n\n * Copy the index bulk-deletion result returned from ambulkdelete and\n- * amvacuumcleanup to the DSM segment if it's the first time to get it\n- * from them, because they allocate it locally and it's possible that an\n- * index will be vacuumed by the different vacuum process at the next\n- * time. The copying of the result normally happens only after the first\n- * time of index vacuuming. 
From the second time, we pass the result on\n- * the DSM segment so that they then update it directly.\n+ * amvacuumcleanup to the DSM segment if it's the first time to get a result\n+ * from a worker, because workers allocate BulkDeleteResults locally,\nand it's possible that an\n+ * index will be vacuumed by a different vacuum process the next\n+ * time.\n\nThis can be done by the leader backend as well, so we can't use\nworkers terminology here. Also, I don't see the need to mention\nBulkDeleteResults. I will include some changes from this text.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 8 Apr 2020 09:40:27 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: doc review for parallel vacuum" }, { "msg_contents": "On Tue, 7 Apr 2020 at 13:55, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Tue, Apr 07, 2020 at 09:57:46AM +0530, Amit Kapila wrote:\n> > On Mon, Mar 23, 2020 at 10:34 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > > On Sun, Mar 22, 2020 at 7:48 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > > >\n> > > > Original, long thread\n> > > > https://www.postgresql.org/message-id/flat/CAA4eK1%2Bnw1FBK3_sDnW%2B7kB%2Bx4qbDJqetgqwYW8k2xv82RZ%2BKw%40mail.gmail.com#b1745ee853b137043e584b500b41300f\n> > > >\n> > >\n> > > I have comments/questions on the patches:\n> > > doc changes\n> > > -------------------------\n> > > 1.\n> > > <para>\n> > > - Perform vacuum index and cleanup index phases of <command>VACUUM</command>\n> > > + Perform vacuum index and index cleanup phases of <command>VACUUM</command>\n> > >\n> > > Why is the proposed text improvement over what is already there?\n> >\n> > I have kept the existing text as it is for this case.\n>\n> Probably it should say \"index vacuum and index cleanup\", which is also what the\n> comment in vacuumlazy.c says.\n>\n> > > 2.\n> > > If the\n> > > - <literal>PARALLEL</literal> option is 
omitted, then\n> > > - <command>VACUUM</command> decides the number of workers based on number\n> > > - of indexes that support parallel vacuum operation on the relation which\n> > > - is further limited by <xref\n> > > linkend=\"guc-max-parallel-workers-maintenance\"/>.\n> > > - The index can participate in a parallel vacuum if and only if the size\n> > > + <literal>PARALLEL</literal> option is omitted, then the number of workers\n> > > + is determined based on the number of indexes that support parallel vacuum\n> > > + operation on the relation, and is further limited by <xref\n> > > + linkend=\"guc-max-parallel-workers-maintenance\"/>.\n> > > + An index can participate in parallel vacuum if and only if the size\n> > > of the index is more than <xref\n> > > linkend=\"guc-min-parallel-index-scan-size\"/>.\n> > >\n> > > Here, it is not clear to me why the proposed text is better than what\n> > > we already have?\n> > Changed as per your suggestion.\n>\n> To answer your question:\n>\n> \"VACUUM decides\" doesn't sound consistent with docs.\n>\n> \"based on {+the+} number\"\n> => here, you're veritably missing an article...\n>\n> \"relation which\" technically means that the *relation* is \"is further limited\n> by\"...\n>\n> > > Patch changes\n> > > -------------------------\n> > > 1.\n> > > - * and index cleanup with parallel workers unless the parallel vacuum is\n> > > + * and index cleanup with parallel workers unless parallel vacuum is\n> > >\n> > > As per my understanding 'parallel vacuum' is a noun phrase, so we\n> > > should have a determiner before it.\n> >\n> > Changed as per your suggestion.\n>\n> Thanks (I think you meant an \"article\").\n>\n> > > - * We allow each worker to update it as and when it has incurred any cost and\n> > > + * We allow each worker to update it AS AND WHEN it has incurred any cost and\n> > >\n> > > I don't see why it is necessary to make this part bold? 
We are using\n> > > it at one other place in the code in the way it is used here.\n> > >\n> >\n> > Kept the existing text as it is.\n>\n> I meant this as a question. I'm not sure what \"as and when means\" ? \"If and\n> when\" ?\n>\n> > I have combined both of your patches. Let me know if you have any\n> > more suggestions, otherwise, I am planning to push this tomorrow.\n>\n> In the meantime, I found some more issues, so I rebased on top of your patch so\n> you can review it.\n>\n\nI don't have comments on your change other than the comments Amit\nalready sent. Thank you for reviewing this part!\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 8 Apr 2020 16:19:11 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: doc review for parallel vacuum" }, { "msg_contents": "On Wed, Apr 8, 2020 at 12:49 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Tue, 7 Apr 2020 at 13:55, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n>\n> I don't have comments on your change other than the comments Amit\n> already sent. Thank you for reviewing this part!\n>\n\nI have made the modifications as per my comments. What do you think\nabout the attached?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Fri, 10 Apr 2020 12:56:08 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: doc review for parallel vacuum" }, { "msg_contents": "On Fri, 10 Apr 2020 at 16:26, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Apr 8, 2020 at 12:49 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Tue, 7 Apr 2020 at 13:55, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > >\n> >\n> > I don't have comments on your change other than the comments Amit\n> > already sent. 
Thank you for reviewing this part!\n> >\n>\n> I have made the modifications as per my comments. What do you think\n> about the attached?\n\nThank you for updating the patch! Looks good to me.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 10 Apr 2020 16:47:41 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: doc review for parallel vacuum" }, { "msg_contents": "On Fri, Apr 10, 2020 at 12:56:08PM +0530, Amit Kapila wrote:\n> On Wed, Apr 8, 2020 at 12:49 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Tue, 7 Apr 2020 at 13:55, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > >\n> >\n> > I don't have comments on your change other than the comments Amit\n> > already sent. Thank you for reviewing this part!\n> >\n> \n> I have made the modifications as per my comments. What do you think\n> about the attached?\n\nCouple more changes (in bold):\n\n- The <option>PARALLEL</option> option is used only for vacuum PURPOSES.\n- Even if this option is specified with THE <option>ANALYZE</option> option\n\nAlso, this part still doesn't read well:\n\n- * amvacuumcleanup to the DSM segment if it's the first time to get it?\n- * from them? because they? 
allocate it locally and it's possible that an\n- * index will be vacuumed by the different vacuum process at the next\n\nIf you change \"it\" and \"them\" and \"it\" and say \"*a* different\", then it'll be\nok.\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 10 Apr 2020 08:46:44 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: doc review for parallel vacuum" }, { "msg_contents": "On Fri, Apr 10, 2020 at 7:16 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> Also, this part still doesn't read well:\n>\n> - * amvacuumcleanup to the DSM segment if it's the first time to get it?\n> - * from them? because they? allocate it locally and it's possible that an\n> - * index will be vacuumed by the different vacuum process at the next\n>\n> If you change \"it\" and \"them\" and \"it\" and say \"*a* different\", then it'll be\n> ok.\n>\n\nI am not sure if I follow how exactly you want to change it but still\nlet me know what you think about if we change it like: \"Copy the index\nbulk-deletion result returned from ambulkdelete and amvacuumcleanup to\nthe DSM segment if it's the first time because they allocate locally\nand it's possible that an index will be vacuumed by the different\nvacuum process at the next time.\"\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 13 Apr 2020 10:44:42 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: doc review for parallel vacuum" }, { "msg_contents": "On Mon, Apr 13, 2020 at 10:44:42AM +0530, Amit Kapila wrote:\n> On Fri, Apr 10, 2020 at 7:16 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > Also, this part still doesn't read well:\n> >\n> > - * amvacuumcleanup to the DSM segment if it's the first time to get it?\n> > - * from them? because they? 
allocate it locally and it's possible that an\n> > - * index will be vacuumed by the different vacuum process at the next\n> >\n> > If you change \"it\" and \"them\" and \"it\" and say \"*a* different\", then it'll be\n> > ok.\n> >\n> \n> I am not sure if I follow how exactly you want to change it but still\n> let me know what you think about if we change it like: \"Copy the index\n> bulk-deletion result returned from ambulkdelete and amvacuumcleanup to\n> the DSM segment if it's the first time because they allocate locally\n> and it's possible that an index will be vacuumed by the different\n> vacuum process at the next time.\"\n\nI changed \"the\" to \"a\" and removed \"at\":\n\n|Copy the index\n|bulk-deletion result returned from ambulkdelete and amvacuumcleanup to\n|the DSM segment if it's the first time [???] because they allocate locally\n|and it's possible that an index will be vacuumed by a different\n|vacuum process the next time.\"\n\nIs it correct to say: \"..if it's the first iteration\" and \"different process on\nthe next iteration\" ? Or \"cycle\" ?\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 13 Apr 2020 03:30:15 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: doc review for parallel vacuum" }, { "msg_contents": "On Mon, Apr 13, 2020 at 2:00 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> |Copy the index\n> |bulk-deletion result returned from ambulkdelete and amvacuumcleanup to\n> |the DSM segment if it's the first time [???] because they allocate locally\n> |and it's possible that an index will be vacuumed by a different\n> |vacuum process the next time.\"\n>\n> Is it correct to say: \"..if it's the first iteration\" and \"different process on\n> the next iteration\" ? Or \"cycle\" ?\n>\n\n\"cycle\" sounds better. I have changed the patch as per your latest\ncomments. 
Let me know what you think?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Mon, 13 Apr 2020 15:22:06 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: doc review for parallel vacuum" }, { "msg_contents": "On Mon, Apr 13, 2020 at 03:22:06PM +0530, Amit Kapila wrote:\n> On Mon, Apr 13, 2020 at 2:00 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > |Copy the index\n> > |bulk-deletion result returned from ambulkdelete and amvacuumcleanup to\n> > |the DSM segment if it's the first time [???] because they allocate locally\n> > |and it's possible that an index will be vacuumed by a different\n> > |vacuum process the next time.\"\n> >\n> > Is it correct to say: \"..if it's the first iteration\" and \"different process on\n> > the next iteration\" ? Or \"cycle\" ?\n> >\n> \n> \"cycle\" sounds better. I have changed the patch as per your latest\n> comments. Let me know what you think?\n\nLooks good. One more change:\n\n[-Even if-]{+If+} this option is specified with the <option>ANALYZE</option> [-option-]{+option,+}\n\nRemove \"even\" and add comma.\n\nThanks,\n-- \nJustin\n\n\n", "msg_date": "Mon, 13 Apr 2020 16:24:40 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: doc review for parallel vacuum" }, { "msg_contents": "On Tue, Apr 14, 2020 at 2:54 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> Looks good. One more change:\n>\n> [-Even if-]{+If+} this option is specified with the <option>ANALYZE</option> [-option-]{+option,+}\n>\n> Remove \"even\" and add comma.\n>\n\nPushed after making this change.\n\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 14 Apr 2020 08:23:31 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: doc review for parallel vacuum" } ]
[ { "msg_contents": "Hi All,\n\nIt is known, that  collation \"C\" significantly speeds up string comparisons and as a result sorting. I was wondering, whether it is possible to use it regardless of collation set on a column in sorts not visible to users?\n\nExample I have in  mind is sorting performed for GroupAggregate. Purpose of that sort is to bring equal values next to each other, so as long as:\n   1) user didn't request ORDER BY in addition to GROUP BY\n   2) source column has any deterministic collation (as per docs all builtin collations are deterministic)\n\nit seems to be possible to do sorting with any deterministic collation, regardless of what user specifid for the column being sorted. \"C\" collation is deterministic and fastest.\n\nIn other words, couldn't PostgreSQL convert this:\n->  GroupAggregate  (cost=15726557.87..22944558.69 rows=7200001 width=176) (actual time=490103.209..771536.389 rows=36000000 loops=1)\n      Group Key: ec_180days.msn, ec_180days.to_date_time\n      ->  Sort  (cost=15726557.87..15906557.89 rows=72000008 width=113) (actual time=490094.849..524854.662 rows=72000000 loops=1)\n            Sort Key: ec_180days.msn, ec_180days.to_date_time\n            Sort Method: external merge  Disk: 7679136kB\n\nTo this:\n->  GroupAggregate  (cost=14988274.87..22206275.69 rows=7200001 width=155) (actual time=140497.729..421510.001 rows=36000000 loops=1)\n\n      Group Key: ec_180days.msn, ec_180days.to_date_time\n\n      ->  Sort  (cost=14988274.87..15168274.89 rows=72000008 width=92) (actual time=140489.807..174228.722 rows=72000000 loops=1)\n\n            Sort Key: ec_180days.msn COLLATE \"C\", ec_180days.to_date_time\n\n            Sort Method: external merge  Disk: 7679136kB\n\nwhich is 3 times faster in my tests.", "msg_date": "Sun, 22 Mar 2020 09:11:45 +0000", "msg_from": "Maxim Ivanov <hi@yamlcoder.me>", "msg_from_op": true, "msg_subject": "optimisation? collation \"C\" sorting for GroupAggregate for all\n deterministic collations" }, { "msg_contents": "Hi\n\nne 22. 3. 2020 v 10:12 odesílatel Maxim Ivanov <hi@yamlcoder.me> napsal:\n\n> Hi All,\n>\n> It is known, that  collation \"C\" significantly speeds up\n> string comparisons and as a result sorting. 
I was wondering, whether it is\n> possible to use it regardless of collation set on a column in sorts not\n> visible to users?\n>\n> Example I have in mind is sorting performed for GroupAggregate. Purpose\n> of that sort is to bring equal values next to each other, so as long as:\n> 1) user didn't request ORDER BY in addition to GROUP BY\n> 2) source column has any deterministic collation (as per docs all\n> builtin collations are deterministic)\n>\n> it seems to be possible to do sorting with any deterministic collation,\n> regardless of what user specifid for the column being sorted. \"C\" collation\n> is deterministic and fastest.\n>\n> In other words, couldn't PostgreSQL convert this:\n>\n> -> GroupAggregate (cost=15726557.87..22944558.69 rows=7200001 width=176)\n> (actual time=490103.209..771536.389 rows=36000000 loops=1)\n> Group Key: ec_180days.msn, ec_180days.to_date_time\n> -> Sort (cost=15726557.87..15906557.89 rows=72000008 width=113)\n> (actual time=490094.849..524854.662 rows=72000000 loops=1)\n> Sort Key: ec_180days.msn, ec_180days.to_date_time\n> Sort Method: external merge Disk: 7679136kB\n>\n> To this:\n>\n> -> GroupAggregate (cost=14988274.87..22206275.69 rows=7200001 width=155)\n> (actual time=140497.729..421510.001 rows=36000000 loops=1)\n> Group Key: ec_180days.msn, ec_180days.to_date_time\n> -> Sort (cost=14988274.87..15168274.89 rows=72000008 width=92)\n> (actual time=140489.807..174228.722 rows=72000000 loops=1)\n> Sort Key: ec_180days.msn COLLATE \"C\", ec_180days.to_date_time\n> Sort Method: external merge Disk: 7679136kB\n>\n>\n> which is 3 times faster in my tests.\n>\n\nI had a same idea. It is possible only if default collation is\ndeterministic. Probably it will be less important if abbreviate sort will\nbe enabled, but it is disabled now.\n\np.s. can be interesting repeat your tests with ICU locale where abbreviate\nsort is enabled.\n\nRegards\n\nPavel", "msg_date": "Sun, 22 Mar 2020 10:32:58 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: optimisation? collation \"C\" sorting for GroupAggregate for all\n deterministic collations" }, { "msg_contents": "On Sun, Mar 22, 2020 at 5:33 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n> Hi\n>\n> ne 22. 3. 2020 v 10:12 odesílatel Maxim Ivanov <hi@yamlcoder.me> napsal:\n>>\n>> Hi All,\n>>\n>> It is known, that  collation \"C\" significantly speeds up string comparisons and as a result sorting. 
\"C\" collation is deterministic and fastest.\n>>\n>> In other words, couldn't PostgreSQL convert this:\n>>\n>> -> GroupAggregate (cost=15726557.87..22944558.69 rows=7200001 width=176) (actual time=490103.209..771536.389 rows=36000000 loops=1)\n>> Group Key: ec_180days.msn, ec_180days.to_date_time\n>> -> Sort (cost=15726557.87..15906557.89 rows=72000008 width=113) (actual time=490094.849..524854.662 rows=72000000 loops=1)\n>> Sort Key: ec_180days.msn, ec_180days.to_date_time\n>> Sort Method: external merge Disk: 7679136kB\n>>\n>> To this:\n>>\n>> -> GroupAggregate (cost=14988274.87..22206275.69 rows=7200001 width=155) (actual time=140497.729..421510.001 rows=36000000 loops=1)\n>> Group Key: ec_180days.msn, ec_180days.to_date_time\n>> -> Sort (cost=14988274.87..15168274.89 rows=72000008 width=92) (actual time=140489.807..174228.722 rows=72000000 loops=1)\n>> Sort Key: ec_180days.msn COLLATE \"C\", ec_180days.to_date_time\n>> Sort Method: external merge Disk: 7679136kB\n>>\n>>\n>> which is 3 times faster in my tests.\n>\n>\n> I had a same idea. It is possible only if default collation is deterministic. Probably it will be less important if abbreviate sort will be enabled, but it is disabled now.\n>\n> p.s. can be interesting repeat your tests with ICU locale where abbreviate sort is enabled.\n\nPerhaps this is what you mean by \"deterministic\", but isn't it\npossible for some collations to treat multiple byte sequences as equal\nvalues? And those multiple byte sequences wouldn't necessarily occur\nsequentially in C collation, so it wouldn't be possible to work around\nthat by having the grouping node use one collation but the sorting\nnode use the C one.\n\nIf my memory is incorrect, then this sounds like an intriguing idea.\n\nJames\n\n\n", "msg_date": "Sun, 22 Mar 2020 17:28:05 -0400", "msg_from": "James Coleman <jtc331@gmail.com>", "msg_from_op": false, "msg_subject": "Re: optimisation? 
collation \"C\" sorting for GroupAggregate for all\n deterministic collations" }, { "msg_contents": ">\n> Perhaps this is what you mean by \"deterministic\", but isn't it\n> possible for some collations to treat multiple byte sequences as equal\n> values? And those multiple byte sequences wouldn't necessarily occur\n> sequentially in C collation, so it wouldn't be possible to work around\n> that by having the grouping node use one collation but the sorting\n> node use the C one.\n>\n> If my memory is incorrect, then this sounds like an intriguing idea.\n>\n>\nI could see the value in a hash aggregate on C-collation that then passes\nitself as a partial aggregate up to another step which applies the\ncollation and then finalizes the aggregation before sorting", "msg_date": "Sun, 22 Mar 2020 21:00:41 -0400", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: optimisation? collation \"C\" sorting for GroupAggregate for all\n deterministic collations" }, { "msg_contents": "\n> Perhaps this is what you mean by \"deterministic\", but isn't it\n> possible for some collations to treat multiple byte sequences as equal\n> values? 
And those multiple byte sequences wouldn't necessarily occur\n> sequentially in C collation, so it wouldn't be possible to work around\n> that by having the grouping node use one collation but the sorting\n> node use the C one.\n>\n> If my memory is incorrect, then this sounds like an intriguing idea.\n\n\nYes, as per the docs (https://www.postgresql.org/docs/12/collation.html#COLLATION-NONDETERMINISTIC) some collations can result in symbols (chars? codes? runes?) being equal while their byte representations are not. This optimisation should check the source table collation and not change the sorting collation if the columns being sorted use a non-deterministic collation.\n\nLuckily, in practice this is probably very rare: all builtin collations are deterministic.\n", "msg_date": "Mon, 23 Mar 2020 15:41:15 +0000", "msg_from": "Maxim Ivanov <hi@yamlcoder.me>", "msg_from_op": true, "msg_subject": "Re: optimisation? collation \"C\" sorting for GroupAggregate for all\n deterministic collations" } ]
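The core claim of the thread above, that any deterministic collation induces the same equality classes, so a GroupAggregate's internal sort could run under the cheap byte-wise "C" order, can be illustrated with a toy sketch. This is plain Python, not PostgreSQL code, and the comparator names are invented for illustration:

```python
from itertools import groupby

def groups_after_sort(rows, sort_key):
    """Sort under some total order, then form GroupAggregate-style
    runs of adjacent equal values."""
    ordered = sorted(rows, key=sort_key)
    return {k: len(list(g)) for k, g in groupby(ordered)}

rows = ["b", "a", "B", "a", "c", "B"]

# A deterministic "locale" order: primary weight is case-insensitive, with a
# byte-wise tie-break, so equal sort keys imply byte-identical strings.
locale_key = lambda s: (s.lower(), s)

# The "C" collation: plain byte order.
c_key = lambda s: s

# Both total orders make every run of equal values contiguous, so the
# resulting groups are identical; only the sort itself got cheaper.
assert groups_after_sort(rows, locale_key) == groups_after_sort(rows, c_key)

# With a NON-deterministic order (case-insensitive, no tie-break), "B" and
# "b" would compare equal without being byte-identical, and the shortcut
# would no longer be valid, which is the caveat raised in the replies.
```

The sketch only shows why determinism is the load-bearing assumption; it says nothing about where in the planner such a substitution would be safe.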
[ { "msg_contents": "Hi All,\n\nWe found that Postgres12 doesn't support ASLR. Attached is the Process Explorer screenshot (Process_Explorer_ASLR.png).\n\nAnalyzing the dumpbin headers of postgres, it looks like the /HIGHENTROPYVA flag is set but not the /DYNAMICBASE flag (dumpbin_headers.txt). According to this link <https://github.com/MicrosoftDocs/cpp-docs/issues/282>, the resulting image will not have ASLR enabled.\n\nWindows has a feature to force randomization of images (Mandatory ASLR for those images which have not been compiled with /DYNAMICBASE).\nEnabling this also didn't have any effect.\n\nThe base addresses of postgres in Process Explorer don't change upon restart (Postgres_Imagebase.png).\n\nWe would like to know if there is a roadmap to enable ASLR support for postgre.\n\nLet us know if you need more information.\n\nRegards,\nJoel", "msg_date": "Mon, 23 Mar 2020 03:27:42 +0000", "msg_from": "\"Joel Mariadasan (jomariad)\" <jomariad@cisco.com>", "msg_from_op": true, "msg_subject": "ASLR support for Postgres12" }, { "msg_contents": "\"Joel Mariadasan (jomariad)\" <jomariad@cisco.com> writes:\n> We would like to know if there is a roadmap to enable ASLR support for postgre.\n\nNot on Windows --- since that OS doesn't support fork(), it's too\ndifficult to get different child processes to map shared memory\nat the same address if ASLR is active.\n\nIf that feature is important to you, use a different operating\nsystem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 23 Mar 2020 10:45:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ASLR support for Postgres12" }, { "msg_contents": "Thanks Tom for the quick update. 
\n\nCan you please point me to a link or give the list of OSes where ASLR is officially supported by Postgres?\n\nRegards,\nJoel\n\n-----Original Message-----\nFrom: Tom Lane <tgl@sss.pgh.pa.us> \nSent: Monday, March 23, 2020 8:16 PM\nTo: Joel Mariadasan (jomariad) <jomariad@cisco.com>\nCc: pgsql-hackers@postgresql.org\nSubject: Re: ASLR support for Postgres12\n\n\"Joel Mariadasan (jomariad)\" <jomariad@cisco.com> writes:\n> We would like to know if there is a roadmap to enable ASLR support for postgre.\n\nNot on Windows --- since that OS doesn't support fork(), it's too difficult to get different child processes to map shared memory at the same address if ASLR is active.\n\nIf that feature is important to you, use a different operating system.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 23 Mar 2020 15:28:22 +0000", "msg_from": "\"Joel Mariadasan (jomariad)\" <jomariad@cisco.com>", "msg_from_op": true, "msg_subject": "RE: ASLR support for Postgres12" }, { "msg_contents": "\"Joel Mariadasan (jomariad)\" <jomariad@cisco.com> writes:\n> Can you please point me to a link or give the list of OSes where ASLR is officially supported by Postgres?\n\nEverything except Windows.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 23 Mar 2020 12:04:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ASLR support for Postgres12" } ]
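The fork() point in the thread above is the crux: a forked child inherits the parent's shared-memory mapping at whatever virtual address ASLR happened to choose, with no re-attach step that could fail, whereas a freshly spawned process (the Windows model) must re-create the mapping and needs the address to be predictable. A minimal POSIX-only sketch of that inheritance, in Python purely for brevity (PostgreSQL itself does this in C):

```python
import mmap
import os
import struct

# Anonymous shared mapping (MAP_SHARED | MAP_ANONYMOUS under the hood).
shared = mmap.mmap(-1, 8)

pid = os.fork()
if pid == 0:
    # Child: the mapping was inherited across fork() at the same virtual
    # address, regardless of where ASLR placed anything else.
    shared[:8] = struct.pack("q", 4242)
    os._exit(0)

os.waitpid(pid, 0)
# Parent sees the child's write through the shared mapping.
(value,) = struct.unpack("q", shared[:8])
assert value == 4242
```

This requires os.fork() and so runs only on Unix-like systems, which is exactly the asymmetry the thread describes.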
[ { "msg_contents": "Greetings.\n\nThis thread is a follow-up thread for [1], where I submit a patch for\nerasing the\ndistinct node if we have known the data is unique for sure. But since the\nimplementation has changed a lot from the beginning and they are not very\nrelated, so I start this new thread to discuss the new strategy to save the\ntime\nof reviewers.\n\nAs I said above, my original intention is just used to erase the distinct\nclause,\nthen Tom Lane suggested function query_is_distinct_for, I found the\nuniqueness\ncan be used for costing, remove_useless_join, reduce_unqiue_semijoins.\nDavid suggested to maintain the uniqueness from bottom to top, like join\n& subqueries, group-by, distinct, union and so on(we call it as UniqueKey).\n\nIdeally the uniqueness will be be lost in any case. This current\nimplementation\nfollows the David's suggestion and also thanks Ashutosh who reminded me\nthe cost should be ok while I had concerns of this at the beginning.\n\nA new field named uniquekeys was added in RelOptInfo struct, which is a\nlist of UniqueKey struct.\n\ntypedef struct UniqueKey\n{\n NodeTag type;\n List *exprs;\n List *positions;\n bool grantee;\n} UniqueKey;\n\nexprs is a list of exprs which is unique if we don't care about the null\nvaues on\ncurrent RelOptInfo.\n\npositions is a list of the sequence no. of the exprs in the current\nRelOptInfo,\nwhich is used for SubQuery. like\n\ncreate table t1 (a int primary key, b int);\ncreate table t2 (a int primary key, b int);\nselect .. from t1, (select b from t2 group by t2) t2 ..;\n\nThe UniqueKey for the subquery will be Var(varno=1, varattno=2), but for\nthe\ntop query, the UniqueKey of t2 should be Var(varno=2, varattrno=1), the 1\nhere\nneed to be calculated by UnqiueKey->positions.\n\ngrantee field is introduced mainly for remove_useless_join &\nreduce_unique_semijions.\nTake the above case for example:\n\n-- b is nullable. 
so select b from t2 still can result in duplicated rows.\ncreate unique index t2_uk_b on t2(b);\n\n-- the left join still can be removed since t2.b is a unique index and the\nnullable\ndoesn't matter here.\nselect t1.* from t1 left join t2 on (t1.b = t2.b);\n\nso t2.b will still be an UniqueKey for t2, just that the grantee = false.\n\nA branch of functions like populate_xxx_unqiuekeys for manipulating\nuniquekeys\nfor a lot of cases, xxx maybe baserel, joinrel, paritioned table, unionrel,\ngroupbyrel,\ndistincrel and so on. partitioned table has some not obviously troubles\ndue to\nusers can create index on the childrel directly and differently. You can\ncheck\nthe comments of the code for details.\n\n\nWhen maintaining the uniquekeys of joinrel, we have a rule that if both\nrels have\nUniqueKeys, then any combination from the 2 sides is a unqiquekey of the\njoinrel.\nI used two algorithms to keep the length of the UniqueKeys short. One is\nwe only\nadd useful UniqueKey to the RelOptInfo.uniquekeys. If the expr isn't shown\nin\nrel->reltargets->exprs, it will not be used for others, so we can ignore it\nsafely.\nThe another one is if column sets A is unqiuekey already, any superset of\nA\nwill no need to be added as an UnqiueKey.\n\n\nThe overall cost of the maintaining unqiuekeys should be ok. If you check\nthe code,\nyou may find there are many 2 or 3 levels foreach, but most of them are\nstarted with\nunique index, and I used UnqiueKeyContext and SubqueryUnqiueKeyContext in\njoinrel\nand subquery case to avoid too many loops.\n\nNow I have used the UnqiueKey to erase the unnecessary distinct/group by,\nand also changed\nthe rel_is_distinct_for to use UnqiueKeys. 
so we can handle more cases.\n\ncreate table m1 (a int primary key, b int, c int);\ncreate table m2 (a int primary key, b int, c int);\ncreate table m3 (a int primary key, b int, c int);\n\nWit the current patch, we can get:\ntask3=# explain select t1.a from m3 t1 left join (select m1.a from m1, m2\nwhere m1.b = m2.a limit 1) t2 on (t1.a = t2.a);\n QUERY PLAN\n---------------------------------------------------------\n Seq Scan on m3 t1 (cost=0.00..32.60 rows=2260 width=4)\n\n\nBefore the patch, we will get:\npostgres=# explain select t1.a from m3 t1 left join (select m1.a from m1,\nm2 where m1.b = m2.a limit 1) t2 on (t1.a = t2.a)\npostgres-# ;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------\n Hash Left Join (cost=0.39..41.47 rows=2260 width=4)\n Hash Cond: (t1.a = m1.a)\n -> Seq Scan on m3 t1 (cost=0.00..32.60 rows=2260 width=4)\n -> Hash (cost=0.37..0.37 rows=1 width=4)\n -> Limit (cost=0.15..0.36 rows=1 width=4)\n -> Nested Loop (cost=0.15..470.41 rows=2260 width=4)\n -> Seq Scan on m1 (cost=0.00..32.60 rows=2260\nwidth=8)\n -> Index Only Scan using m2_pkey on m2\n (cost=0.15..0.19 rows=1 width=4)\n Index Cond: (a = m1.b)\n\n\nThe \"limit 1\" here is just want to avoid the pull_up_subquery to pull up\nthe subquery,\nI think we may still have opportunities to improve this further if we check\nif we can\nremove a join *just before we join 2 relations*. we may have the similar\nsituation\nfor reduce_unique_semijions joins. After the changes has been done, we can\nremove\nthe \"limit 1\" here to show the diffidence. I didn't include this change in\ncurrent patch\nsince I think the effort may be not small and I want to keep this patch\nsimple.\n\nSome known issues needs attentions:\n1. I didn't check the collation at the whole stage, one reason is the\nrelation_has_unique_index_for\n doesn't check it as well. 
The other reason if a in collation A is unique,\nand then A in collation B is\nunique as well, we can ignore it. [2]\n2. Current test case contrib/postgres_fdw/sql/postgres_fdw.sql is still\nfailed. I am not sure if\nthe bug is in my patch or not.\n\nKindly waiting for your feedback, Thanks you!\n\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CAKU4AWqOORqW900O-%2BL4L2%2B0xknsEqpfcs9FF7SeiO9TmpeZOg%40mail.gmail.com#f5d97cc66b9cd330add2fbb004a4d107\n\n[2]\nhttps://www.postgresql.org/message-id/CAKU4AWqOORqW900O-%2BL4L2%2B0xknsEqpfcs9FF7SeiO9TmpeZOg%40mail.gmail.com\n\n\n\nBest regards\nAndy Fan", "msg_date": "Mon, 23 Mar 2020 18:21:38 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "On Mon, Mar 23, 2020 at 6:21 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n> Greetings.\n>\n> This thread is a follow-up thread for [1], where I submit a patch for\n> erasing the\n> distinct node if we have known the data is unique for sure. But since the\n> implementation has changed a lot from the beginning and they are not very\n> related, so I start this new thread to discuss the new strategy to save\n> the time\n> of reviewers.\n>\n> As I said above, my original intention is just used to erase the distinct\n> clause,\n> then Tom Lane suggested function query_is_distinct_for, I found the\n> uniqueness\n> can be used for costing, remove_useless_join, reduce_unqiue_semijoins.\n> David suggested to maintain the uniqueness from bottom to top, like join\n> & subqueries, group-by, distinct, union and so on(we call it as\n> UniqueKey).\n> Ideally the uniqueness will be be lost in any case. 
This current\n> implementation\n> follows the David's suggestion and also thanks Ashutosh who reminded me\n> the cost should be ok while I had concerns of this at the beginning.\n>\n> A new field named uniquekeys was added in RelOptInfo struct, which is a\n> list of UniqueKey struct.\n>\n> typedef struct UniqueKey\n> {\n> NodeTag type;\n> List *exprs;\n> List *positions;\n> bool grantee;\n> } UniqueKey;\n>\n> exprs is a list of exprs which is unique if we don't care about the null\n> vaues on\n> current RelOptInfo.\n>\n> positions is a list of the sequence no. of the exprs in the current\n> RelOptInfo,\n> which is used for SubQuery. like\n>\n> create table t1 (a int primary key, b int);\n> create table t2 (a int primary key, b int);\n> select .. from t1, (select b from t2 group by t2) t2 ..;\n>\n> The UniqueKey for the subquery will be Var(varno=1, varattno=2), but for\n> the\n> top query, the UniqueKey of t2 should be Var(varno=2, varattrno=1), the 1\n> here\n> need to be calculated by UnqiueKey->positions.\n>\n> grantee field is introduced mainly for remove_useless_join &\n> reduce_unique_semijions.\n> Take the above case for example:\n>\n> -- b is nullable. so select b from t2 still can result in duplicated\n> rows.\n> create unique index t2_uk_b on t2(b);\n>\n> -- the left join still can be removed since t2.b is a unique index and the\n> nullable\n> doesn't matter here.\n> select t1.* from t1 left join t2 on (t1.b = t2.b);\n>\n> so t2.b will still be an UniqueKey for t2, just that the grantee = false.\n>\n> A branch of functions like populate_xxx_unqiuekeys for manipulating\n> uniquekeys\n> for a lot of cases, xxx maybe baserel, joinrel, paritioned table,\n> unionrel, groupbyrel,\n> distincrel and so on. partitioned table has some not obviously troubles\n> due to\n> users can create index on the childrel directly and differently. 
You can\n> check\n> the comments of the code for details.\n>\n>\n> When maintaining the uniquekeys of joinrel, we have a rule that if both\n> rels have\n> UniqueKeys, then any combination from the 2 sides is a unqiquekey of the\n> joinrel.\n> I used two algorithms to keep the length of the UniqueKeys short. One is\n> we only\n> add useful UniqueKey to the RelOptInfo.uniquekeys. If the expr isn't\n> shown in\n> rel->reltargets->exprs, it will not be used for others, so we can ignore\n> it safely.\n> The another one is if column sets A is unqiuekey already, any superset of\n> A\n> will no need to be added as an UnqiueKey.\n>\n>\n> The overall cost of the maintaining unqiuekeys should be ok. If you check\n> the code,\n> you may find there are many 2 or 3 levels foreach, but most of them are\n> started with\n> unique index, and I used UnqiueKeyContext and SubqueryUnqiueKeyContext in\n> joinrel\n> and subquery case to avoid too many loops.\n>\n> Now I have used the UnqiueKey to erase the unnecessary distinct/group by,\n> and also changed\n> the rel_is_distinct_for to use UnqiueKeys. 
so we can handle more cases.\n>\n> create table m1 (a int primary key, b int, c int);\n> create table m2 (a int primary key, b int, c int);\n> create table m3 (a int primary key, b int, c int);\n>\n> Wit the current patch, we can get:\n> task3=# explain select t1.a from m3 t1 left join (select m1.a from m1,\n> m2 where m1.b = m2.a limit 1) t2 on (t1.a = t2.a);\n> QUERY PLAN\n> ---------------------------------------------------------\n> Seq Scan on m3 t1 (cost=0.00..32.60 rows=2260 width=4)\n>\n>\n> Before the patch, we will get:\n> postgres=# explain select t1.a from m3 t1 left join (select m1.a from\n> m1, m2 where m1.b = m2.a limit 1) t2 on (t1.a = t2.a)\n> postgres-# ;\n> QUERY PLAN\n>\n> -----------------------------------------------------------------------------------------------\n> Hash Left Join (cost=0.39..41.47 rows=2260 width=4)\n> Hash Cond: (t1.a = m1.a)\n> -> Seq Scan on m3 t1 (cost=0.00..32.60 rows=2260 width=4)\n> -> Hash (cost=0.37..0.37 rows=1 width=4)\n> -> Limit (cost=0.15..0.36 rows=1 width=4)\n> -> Nested Loop (cost=0.15..470.41 rows=2260 width=4)\n> -> Seq Scan on m1 (cost=0.00..32.60 rows=2260\n> width=8)\n> -> Index Only Scan using m2_pkey on m2\n> (cost=0.15..0.19 rows=1 width=4)\n> Index Cond: (a = m1.b)\n>\n>\n> The \"limit 1\" here is just want to avoid the pull_up_subquery to pull up\n> the subquery,\n> I think we may still have opportunities to improve this further if we\n> check if we can\n> remove a join *just before we join 2 relations*. we may have the similar\n> situation\n> for reduce_unique_semijions joins. After the changes has been done, we\n> can remove\n> the \"limit 1\" here to show the diffidence. I didn't include this change\n> in current patch\n> since I think the effort may be not small and I want to keep this patch\n> simple.\n>\n> Some known issues needs attentions:\n> 1. I didn't check the collation at the whole stage, one reason is the\n> relation_has_unique_index_for\n> doesn't check it as well. 
The other reason if a in collation A is\n> unique, and then A in collation B is\n> unique as well, we can ignore it. [2]\n> 2. Current test case contrib/postgres_fdw/sql/postgres_fdw.sql is still\n> failed. I am not sure if\n> the bug is in my patch or not.\n>\n> Kindly waiting for your feedback, Thanks you!\n>\n>\n> [1]\n> https://www.postgresql.org/message-id/flat/CAKU4AWqOORqW900O-%2BL4L2%2B0xknsEqpfcs9FF7SeiO9TmpeZOg%40mail.gmail.com#f5d97cc66b9cd330add2fbb004a4d107\n>\n> [2]\n> https://www.postgresql.org/message-id/CAKU4AWqOORqW900O-%2BL4L2%2B0xknsEqpfcs9FF7SeiO9TmpeZOg%40mail.gmail.com\n>\n>\n\nJust update the patch which do some test case changes.\n1. add \"ANALYZE\" command before running the explain.\n2. order by with an explicit collate settings.\n3. As for the postgres_fdw.sql, I just copied the results.out to\nexpected.out,\nthat's should be correct based on the result. However I added my comment\naround that.\n\nNow suppose the cbfot should pass this time.\n\nBest Regards.\nAndy Fan", "msg_date": "Wed, 25 Mar 2020 22:24:42 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "Because I replied the old thread, cfbot run a test based on the old patch\non that thread. I have detached the old thread from commitfest. Reply\nthis\nemail again to wake up Mr. cfbot with the right information.", "msg_date": "Thu, 26 Mar 2020 09:55:35 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": ">\n>\n>> Just update the patch which do some test case changes.\n> 1. add \"ANALYZE\" command before running the explain.\n> 2. order by with an explicit collate settings.\n>\n\nThanks Rushabh for pointing this out, or else I'd spend much more time to\nfigure\nout why I get a different order on Windows.\n\n3. 
As for the postgres_fdw.sql, I just copied the results.out to\n> expected.out,\n> that's should be correct based on the result. However I added my comment\n> around that.\n>\n> The issue doesn't exist at all. The confusion was introduced by a\nmisunderstanding\nof the test case (I treated count (xx) filter (xxx) as a window function\nrather than an aggration\nfunction). so just fixed the it cleanly.\n\nSome other changes made in the new patch:\n1. Fixed bug for UniqueKey calculation for OUTER join.\n2. Fixed some typo error in comments.\n3. Renamed the field \"grantee\" as \"guarantee\".\n\nBest Regards\nAndy Fan", "msg_date": "Sun, 29 Mar 2020 15:48:42 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "On Sun, 29 Mar 2020 at 20:50, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> Some other changes made in the new patch:\n> 1. Fixed bug for UniqueKey calculation for OUTER join.\n> 2. Fixed some typo error in comments.\n> 3. Renamed the field \"grantee\" as \"guarantee\".\n\nI've had a look over this patch. Thank for you doing further work on it.\n\nI've noted down the following during my read of the code:\n\n1. There seem to be some cases where joins are no longer being\ndetected as unique. This is evident in postgres_fdw.out. We shouldn't\nbe regressing any of these cases.\n\n2. The following change does not seem like it should be part of this\npatch. I understand you perhaps have done as you think it will\nimprove the performance of checking if an expression is in a list of\nexpressions.\n\n- COMPARE_SCALAR_FIELD(varno);\n+ /* Compare varattno first since it has higher selectivity than varno */\n COMPARE_SCALAR_FIELD(varattno);\n+ COMPARE_SCALAR_FIELD(varno);\n\nIf you think that is true, then please do it as a separate effort and\nprovide benchmarks with your findings.\n\n3. list_all_members_in. 
I think this would be better named as\nlist_is_subset. Please follow the lead of bms_is_subset().\nAdditionally, you should Assert that IsPointerList is true as there's\nnothing else to indicate that it can't be used for an int or Oid list.\n\n4. guarantee is not a very good name for the field in UniqueKey.\nMaybe something like is_not_null?\n\n5. I think you should be performing a bms_del_member during join\nremoval rather than removing this Assert()\n\n- Assert(bms_equal(rel->relids, root->all_baserels));\n\nFWIW, it's far from perfect that you've needed to delay the left join\nremoval, but I do understand why you've done it. It's also far from\nperfect that you're including removed relations in the\ntotal_table_pages calculation. c6e4133fae1 took some measures to\nimprove this calculation and this is making it worse again.\n\n6. Can you explain why you moved the populate_baserel_uniquekeys()\ncall out of set_plain_rel_size()?\n\n7. I don't think the removal of rel_supports_distinctness() is\nwarranted. Is it not ok to check if the relation has any uniquekeys?\nIt's possible, particularly in join_is_removable that this can save\nquite a large amount of effort.\n\n8. 
Your spelling of unique is incorrect in many places:\n\nsrc/backend/nodes/makefuncs.c: * makeUnqiueKey\nsrc/backend/optimizer/path/uniquekeys.c:static List\n*initililze_unqiuecontext_for_joinrel(RelOptInfo *joinrel,\nsrc/backend/optimizer/path/uniquekeys.c: * check if combination of\nunqiuekeys from both side is still useful for us,\nsrc/backend/optimizer/path/uniquekeys.c: outerrel_uniquekey_ctx\n= initililze_unqiuecontext_for_joinrel(joinrel, outerrel);\nsrc/backend/optimizer/path/uniquekeys.c: innerrel_uniquekey_ctx\n= initililze_unqiuecontext_for_joinrel(joinrel, innerrel);\nsrc/backend/optimizer/path/uniquekeys.c: * we need to convert the\nUnqiueKey from sub_final_rel to currel via the positions info in\nsrc/backend/optimizer/path/uniquekeys.c: ctx->pos =\npos; /* the position in current targetlist, will be used to set\nUnqiueKey */\nsrc/backend/optimizer/path/uniquekeys.c: * Check if Unqiue key of the\ninnerrel is valid after join. innerrel's UniqueKey\nsrc/backend/optimizer/path/uniquekeys.c: * initililze_unqiuecontext_for_joinrel\nsrc/backend/optimizer/path/uniquekeys.c: * all the unqiuekeys which\nare not possible to use later\nsrc/backend/optimizer/path/uniquekeys.c:initililze_unqiuecontext_for_joinrel(RelOptInfo\n*joinrel, RelOptInfo *inputrel)\nsrc/backend/optimizer/plan/analyzejoins.c: /*\nThis UnqiueKey is what we want */\nsrc/backend/optimizer/plan/planner.c: /* If we the result if unqiue\nalready, we just return the input_rel directly */\nsrc/include/nodes/pathnodes.h: * exprs is a list of exprs which is\nunqiue on current RelOptInfo.\nsrc/test/regress/expected/join.out:-- XXXX: since b.id is unqiue now\nso the group by cluase is erased, so\nsrc/test/regress/expected/select_distinct.out:-- create unqiue index on dist_p\nsrc/test/regress/expected/select_distinct.out:-- we also support\ncreate unqiue index on each child tables\nsrc/test/regress/sql/join.sql:-- XXXX: since b.id is unqiue now so the\ngroup by cluase is erased, 
so\nsrc/test/regress/sql/select_distinct.sql:-- create unqiue index on dist_p\nsrc/test/regress/sql/select_distinct.sql:-- we also support create\nunqiue index on each child tables\n\n9. A few things wrong with the following fragment:\n\n/* set the not null info now */\nListCell *lc;\nforeach(lc, find_nonnullable_vars(qual))\n{\nVar *var = lfirst_node(Var, lc);\nRelOptInfo *rel = root->simple_rel_array[var->varno];\nif (var->varattno > InvalidAttrNumber)\nrel->not_null_cols = bms_add_member(rel->not_null_cols, var->varattno);\n}\n\na. including a function call in the foreach macro is not a practise\nthat we really follow. It's true that the macro now assigns the 2nd\nparam to a variable. Previous to 1cff1b95ab6 this was not the case and\nit's likely best not to leave any bad examples around that code which\nmight get backported might follow.\nb. We generally subtract InvalidAttrNumber from varattno when\nincluding in a Bitmapset.\nc. not_null_cols is not well named. I think notnullattrs\nd. not_null_cols should not be a Relids type, it should be Bitmapset.\n\n10. add_uniquekey_for_onerow() seems pretty wasteful. Is there really\na need to add each item in the rel's targetlist to the uniquekey list?\nWhat if we just add an empty list to the unique keys, that way if we\nneed to test if some expr is a superset of any uniquekey, then we'll\nsee it is as any set is a superset of an empty set. Likely the empty\nset of uniquekeys should be the only one in the rel's uniquekey list.\n\n11. In create_distinct_paths() the code is now calling\nget_sortgrouplist_exprs() multiple times with the same input. I think\nit would be better to just call it once and set the result in a local\nvariable.\n\n12. The comment in the code below is not true. The List contains\nLists, of which contain UniqueKeys\n\nList *uniquekeys; /* List of UniqueKey */\n\n13. I'm having trouble parsing the final sentence in:\n\n+ * can only guarantee the uniqueness without considering the null values. 
This\n+ * field is necessary for remove_useless_join & reduce_unique_semijions since\n+ * these cases don't care about the null values.\n\nWhy is the field which stores the nullability of the key required for\ncode that does not care about the nullability of the key?\n\nAlso please check your spelling of the word \"join\"\n\n14. In the following fragment, instead of using i+1, please assign the\nFormData_pg_attribute to a variable named attr and use attr->attnum.\nAlso, please see what I mentioned above about subtracting\nInvalidAttrNumber\n\n+ rel->not_null_cols = bms_add_member(rel->not_null_cols, i+1);\n\n15. The tests you've changed the expected outcome of in join.out\nshould be updated so that the GROUP BY and DISTINCT clause is not\nremoved. This will allow the test to continue testing what it was\nintended to test. You can do this by changing the columns in the GROUP\nBY clause so that the new code does not find uniquekeys for those\ncolumns.\n\n16. The tests in aggregates.out are in a similar situation. There are\nvarious tests trying to ensure that remove_useless_groupby_columns()\ndoes what it's meant to do. You can modify these tests to add a join\nwhich is non-unique to effectively duplicate the PK column.\n\n17. In your select_distinct tests, can you move away from naming the\ntables starting with select_distinct? It makes reading queries pretty\nhard.\n\ne.g. explain (costs off) select distinct uk1, uk2 from\nselect_distinct_a where uk2 is not null;\n\nWhen I first glanced that, I failed to see the underscores and the\nquery looked invalid.\n\n18. Check the spelling if \"erased\". You have it spelt as \"ereased\" in\na couple of locations.\n\n19. Please pay attention to the capitalisation of SQL keywords in the\ntest files you've modified. I understand we're very inconsistent in\nthis department in general, but we do at least try not to mix\ncapitalisation within the same file. Basically, please upper case the\nkeywords in select_distinct.sql\n\n20. 
In addition to the above, please try to wrap long SQL lines so\nthey're below 80 chars.\n\nI'll review the patch in more detail once the above points have been addressed.\n\nDavid\n\n\n", "msg_date": "Tue, 31 Mar 2020 14:43:52 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "[ not a review, just some drive-by comments on David's comments ]\n\nDavid Rowley <dgrowleyml@gmail.com> writes:\n> 2. The following change does not seem like it should be part of this\n> patch. I understand you perhaps have done as you think it will\n> improve the performance of checking if an expression is in a list of\n> expressions.\n\n> - COMPARE_SCALAR_FIELD(varno);\n> + /* Compare varattno first since it has higher selectivity than varno */\n> COMPARE_SCALAR_FIELD(varattno);\n> + COMPARE_SCALAR_FIELD(varno);\n\n> If you think that is true, then please do it as a separate effort and\n> provide benchmarks with your findings.\n\nBy and large, I'd reject such micro-optimizations on their face.\nThe rule in the nodes/ support files is to list fields in the same\norder they're declared in. There is no chance that it's worth\ndeviating from that for this.\n\nI can believe that there'd be value in, say, comparing all\nscalar fields before all non-scalar ones. But piecemeal hacks\nwouldn't be the way to handle that either. In any case, I'd\nprefer to implement such a plan within the infrastructure to\nauto-generate these files that Andres keeps muttering about.\n\n> a. including a function call in the foreach macro is not a practise\n> that we really follow. It's true that the macro now assigns the 2nd\n> param to a variable. Previous to 1cff1b95ab6 this was not the case and\n> it's likely best not to leave any bad examples around that code which\n> might get backported might follow.\n\nNo, I think you're misremembering. 
foreach's second arg is
single-evaluation in all branches.  There were some preliminary
versions of 1cff1b95ab6 in which it would not have been, but that
was sufficiently dangerous that I found a way to get rid of it.

> b. We generally subtract InvalidAttrNumber from varattno when
> including in a Bitmapset.

ITYM FirstLowInvalidHeapAttributeNumber, but yeah.  Otherwise
the code fails on system columns, and there's seldom a good
reason to risk that.

			regards, tom lane


", "msg_date": "Mon, 30 Mar 2020 23:11:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "Thanks David for your time; I will address every item you mentioned
in the updated patch.  For now I will just discuss some of them.


> 1. There seem to be some cases where joins are no longer being
> detected as unique. This is evident in postgres_fdw.out. We shouldn't
> be regressing any of these cases.


You are correct; the issue here is that I didn't distinguish the one-row
case in the UniqueKey struct.  When an outer relation is joined with a
relation which has only one row, at most one row can match each outer
row.  The only-one-row cases in postgres_fdw.out come from aggregate
calls.  I will add one more field, "bool onerow", to the UniqueKey
struct, and also try your optimization suggestion for the onerow
UniqueKey.


> 2. The following change does not seem like it should be part of this
> patch.  I understand you perhaps have done as you think it will
> improve the performance of checking if an expression is in a list of
> expressions.
>
> - COMPARE_SCALAR_FIELD(varno);
> + /* Compare varattno first since it has higher selectivity than varno */
>   COMPARE_SCALAR_FIELD(varattno);
> + COMPARE_SCALAR_FIELD(varno);
>
I did hesitate when making this change, 
so I will roll it back in the next version of the patch.

> If you think that is true, then please do it as a separate effort and
> provide benchmarks with your findings.
>
> 3. list_all_members_in. I think this would be better named as
> list_is_subset. Please follow the lead of bms_is_subset().
> Additionally, you should Assert that IsPointerList is true as there's
> nothing else to indicate that it can't be used for an int or Oid list.
>
> 4. guarantee is not a very good name for the field in UniqueKey.
> Maybe something like is_not_null?
>
>
> 5. I think you should be performing a bms_del_member during join
> removal rather than removing this Assert()
>
> - Assert(bms_equal(rel->relids, root->all_baserels));
>
> FWIW, it's far from perfect that you've needed to delay the left join
> removal, but I do understand why you've done it. It's also far from
> perfect that you're including removed relations in the
> total_table_pages calculation. c6e4133fae1 took some measures to
> improve this calculation and this is making it worse again.

Since the removed relation depends on the UniqueKey, which has to be
calculated after the total_table_pages calculation in the current code,
that is something I must do.  But if the relation is not removable,
there is no waste at all, and if it is removable, the gain will be much
higher than the loss.  I'm not sure this should be a concern.

Actually, it looks like the current remove_useless_join has some limits
which prevent it from removing a joinrel; I still haven't figured out
why.  In the past we had only a limited ability to detect uniqueness
after a join, so that was OK.  Since we have such an ability now, this
may be another opportunity to improve the join_is_removable function,
but I'd rather not put such a thing in this patch.

Since you said "far from perfect" twice for this point and I only see
one reason (we may plan a node which we remove later), did I miss the
other one?

> 6. 
Can you explain why you moved the populate_baserel_uniquekeys()\n> call out of set_plain_rel_size()?\n>\n> This is to be consistent with populate_partitionedrel_uniquekeys, which\nis set at set_append_rel_pathlist.\n\n\n> 7. I don't think the removal of rel_supports_distinctness() is\n> warranted. Is it not ok to check if the relation has any uniquekeys?\n>\n\nI think this is a good suggestion. I will follow that.\n\n8. Your spelling of unique is incorrect in many places:\n>\n> src/backend/nodes/makefuncs.c: * makeUnqiueKey\n> src/backend/optimizer/path/uniquekeys.c:static List\n> *initililze_unqiuecontext_for_joinrel(RelOptInfo *joinrel,\n> src/backend/optimizer/path/uniquekeys.c: * check if combination of\n> unqiuekeys from both side is still useful for us,\n> src/backend/optimizer/path/uniquekeys.c: outerrel_uniquekey_ctx\n> = initililze_unqiuecontext_for_joinrel(joinrel, outerrel);\n> src/backend/optimizer/path/uniquekeys.c: innerrel_uniquekey_ctx\n> = initililze_unqiuecontext_for_joinrel(joinrel, innerrel);\n> src/backend/optimizer/path/uniquekeys.c: * we need to convert the\n> UnqiueKey from sub_final_rel to currel via the positions info in\n> src/backend/optimizer/path/uniquekeys.c: ctx->pos =\n> pos; /* the position in current targetlist, will be used to set\n> UnqiueKey */\n> src/backend/optimizer/path/uniquekeys.c: * Check if Unqiue key of the\n> innerrel is valid after join. 
innerrel's UniqueKey\n> src/backend/optimizer/path/uniquekeys.c: *\n> initililze_unqiuecontext_for_joinrel\n> src/backend/optimizer/path/uniquekeys.c: * all the unqiuekeys which\n> are not possible to use later\n>\n> src/backend/optimizer/path/uniquekeys.c:initililze_unqiuecontext_for_joinrel(RelOptInfo\n> *joinrel, RelOptInfo *inputrel)\n> src/backend/optimizer/plan/analyzejoins.c: /*\n> This UnqiueKey is what we want */\n> src/backend/optimizer/plan/planner.c: /* If we the result if unqiue\n> already, we just return the input_rel directly */\n> src/include/nodes/pathnodes.h: * exprs is a list of exprs which is\n> unqiue on current RelOptInfo.\n> src/test/regress/expected/join.out:-- XXXX: since b.id is unqiue now\n> so the group by cluase is erased, so\n> src/test/regress/expected/select_distinct.out:-- create unqiue index on\n> dist_p\n> src/test/regress/expected/select_distinct.out:-- we also support\n> create unqiue index on each child tables\n> src/test/regress/sql/join.sql:-- XXXX: since b.id is unqiue now so the\n> group by cluase is erased, so\n> src/test/regress/sql/select_distinct.sql:-- create unqiue index on dist_p\n> src/test/regress/sql/select_distinct.sql:-- we also support create\n> unqiue index on each child tables\n>\n> 9. A few things wrong with the following fragment:\n>\n> /* set the not null info now */\n> ListCell *lc;\n> foreach(lc, find_nonnullable_vars(qual))\n> {\n> Var *var = lfirst_node(Var, lc);\n> RelOptInfo *rel = root->simple_rel_array[var->varno];\n> if (var->varattno > InvalidAttrNumber)\n> rel->not_null_cols = bms_add_member(rel->not_null_cols, var->varattno);\n> }\n>\n> a. including a function call in the foreach macro is not a practise\n> that we really follow. It's true that the macro now assigns the 2nd\n> param to a variable. Previous to 1cff1b95ab6 this was not the case and\n> it's likely best not to leave any bad examples around that code which\n> might get backported might follow.\n> b. 
We generally subtract InvalidAttrNumber from varattno when
> including in a Bitmapset.
> c. not_null_cols is not well named. I think notnullattrs
> d. not_null_cols should not be a Relids type, it should be Bitmapset.
>
If it is a Bitmapset, we usually have to pass it with "&".  Is that our
practice?


> 10. add_uniquekey_for_onerow() seems pretty wasteful.  Is there really
> a need to add each item in the rel's targetlist to the uniquekey list?
> What if we just add an empty list to the unique keys, that way if we
> need to test if some expr is a superset of any uniquekey, then we'll
> see it is as any set is a superset of an empty set.  Likely the empty
> set of uniquekeys should be the only one in the rel's uniquekey list.
>
> 11. In create_distinct_paths() the code is now calling
> get_sortgrouplist_exprs() multiple times with the same input. I think
> it would be better to just call it once and set the result in a local
> variable.
>
> 12. The comment in the code below is not true. The List contains
> Lists, of which contain UniqueKeys
>
> List    *uniquekeys; /* List of UniqueKey */
>
It is a list of UniqueKeys; each UniqueKey can have a list of exprs.


> 13. I'm having trouble parsing the final sentence in:
>
> + * can only guarantee the uniqueness without considering the null values. This
> + * field is necessary for remove_useless_join & reduce_unique_semijions since
> + * these cases don't care about the null values.
>
> Why is the field which stores the nullability of the key required for
> code that does not care about the nullability of the key?
>
The guarantee field is introduced for cases like the following:

create table t1 (a int primary key, b int);
create table t2 (a int primary key, b int);
select .. from t1, (select b from t2 group by b) t2 ..;

-- b is nullable. 
so t2(b) can't be a normal UniqueKey (which means b may have some
duplicated rows).
create unique index t2_uk_b on t2(b);

-- the left join can still be removed, since t2.b has a unique index
-- and the nullability doesn't matter here.
select t1.* from t1 left join t2 on (t1.b = t2.b);

Do you think we can do some optimization in this case?  I don't
understand your question well.


> 15. The tests you've changed the expected outcome of in join.out
> should be updated so that the GROUP BY and DISTINCT clause is not
> removed. This will allow the test to continue testing what it was
> intended to test. You can do this by changing the columns in the GROUP
> BY clause so that the new code does not find uniquekeys for those
> columns.
>

Thanks for your explanation, very impressive!


> 16. The tests in aggregates.out are in a similar situation. There are
> various tests trying to ensure that remove_useless_groupby_columns()
> does what it's meant to do. You can modify these tests to add a join
> which is non-unique to effectively duplicate the PK column.
>
> 17. In your select_distinct tests, can you move away from naming the
> tables starting with select_distinct?  It makes reading queries pretty
> hard.
>
> e.g. explain (costs off) select distinct uk1, uk2 from
> select_distinct_a where uk2 is not null;
>
> When I first glanced at that, I failed to see the underscores and the
> query looked invalid.
>
> 18. Check the spelling of "erased". You have it spelt as "ereased" in
> a couple of locations.

OK, I just installed a spell-check plugin for my editor; I hope it will
catch such errors next time.

> 19. Please pay attention to the capitalisation of SQL keywords in the
> test files you've modified. I understand we're very inconsistent in
> this department in general, but we do at least try not to mix
> capitalisation within the same file.  Basically, please upper case the
> keywords in select_distinct.sql
>
> 20. 
In addition to the above, please try to wrap long SQL lines so
> they're below 80 chars.
>
> I'll review the patch in more detail once the above points have been
> addressed.
>
> David
>
", "msg_date": "Wed, 1 Apr 2020 08:43:40 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "On Wed, 1 Apr 2020 at 13:45, Andy Fan <zhihui.fan1213@gmail.com> wrote:
>> 5. I think you should be performing a bms_del_member during join
>> removal rather than removing this Assert()
>>
>> - Assert(bms_equal(rel->relids, root->all_baserels));
>>
>> FWIW, it's far from perfect that you've needed to delay the left join
>> removal, but I do understand why you've done it. It's also far from
>> perfect that you're including removed relations in the
>> total_table_pages calculation. c6e4133fae1 took some measures to
>> improve this calculation and this is making it worse again.
>>
> Since the removed relation depends on the UniqueKey which has to be
> calculated after total_table_pages calculation in current code, so that's
> something I must do. But if the relation is not removable, there is no waste
> at all. If it is removable, such gain will much higher than the loss. 
I'm
> not sure this should be a concern.

The reason join removal was done so early in planning before was to
save the planner from having to do additional work for relations which
were going to be removed later. For example, building path lists.

> Actually looks the current remove_useless_join has some limits which can't
> remove a joinrel, I still didn't figure out why. In the past we have some limited
> ability to detect the unqiueness after join, so that's would be ok. Since we have
> such ability now, this may be another opportunity to improve the join_is_removable
> function, but I'd not like put such thing in this patch.

Yeah, there are certainly more left join shapes that we could remove,
e.g. when the left join relation is not a singleton rel. We shouldn't
do anything to purposefully block additional join removals as a result
of adding UniqueKeys, but likely shouldn't go to any trouble to make
additional ones work. That can be done later.

> Since you said "far from perfect" twice for this point and I only get one reason (we
> may plan a node which we removed later), do I miss the other one?

a) additional planning work by not removing the join sooner. b) wrong
total page calculation.

In theory b) could be fixed by subtracting the removed join rel's pages
after we remove it, but unfortunately there's no point, since we've
built the paths by that time already and we really only use the value
to determine how much IO is going to be random vs sequential, which is
determined during set_base_rel_pathlists().

>> d. not_null_cols should not be a Relids type, it should be Bitmapset.
>>
> If it is a Bitmapset, we have to pass it with "&" usually. is it our practice?

Well, a Bitmapset pointer. Relids is saved for range table indexes.
Storing anything else in there is likely to lead to confusion.

>> 12. The comment in the code below is not true. 
The List contains\n>> Lists, of which contain UniqueKeys\n>>\n>> List *uniquekeys; /* List of UniqueKey */\n>>\n> It is a list of UniqueKey, the UniqueKey can have a list of exprs.\n\nHmm, so this is what I called a UniqueKeySet in the original patch.\nI'm a bit divided by that change. With PathKeys, technically you can\nmake use of a Path with a given set of PathKeys if you only require\nsome leading subset of those keys. That's not the case for\nUniqueKeys, it's all or nothing, so perhaps having the singular name\nis better than the plural name I gave it. However, I'm not certain.\n\n(Really PathKey does not seem like a great name in the first place\nsince it has nothing to do with keys)\n\n>> 13. I'm having trouble parsing the final sentence in:\n>>\n>> + * can only guarantee the uniqueness without considering the null values. This\n>> + * field is necessary for remove_useless_join & reduce_unique_semijions since\n>> + * these cases don't care about the null values.\n>>\n>> Why is the field which stores the nullability of the key required for\n>> code that does not care about the nullability of the key?\n>>\n> The guarantee is introduced to for the following cases:\n>\n> create table t1 (a int primary key, b int);\n> create table t2 (a int primary key, b int);\n> select .. from t1, (select b from t2 group by t2) t2 ..;\n>\n> -- b is nullable. so t2(b) can't be a normal UniqueKey (which means b may have some\n> duplicated rows)\n> create unique index t2_uk_b on t2(b);\n>\n> -- the left join still can be removed since t2.b is a unique index and the nullable\n> doesn't matter here.\n> select t1.* from t1 left join t2 on (t1.b = t2.b);\n>\n> do you think we have can do some optimization in this case? I don't understand\n> your question well.\n\nOK, so by \"don't care\", you mean, don't duplicate NULL values. I\nassumed you had meant that it does not matter either way, as in: don't\nmind if there are NULL values or not. 
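
To make that distinction concrete, here is a small hypothetical
illustration (not taken from the patch or its regression tests): a
unique index only constrains the non-null values of a column, which is
exactly the gap the guarantee field is meant to describe.

```sql
-- Hypothetical sketch: a unique index permits any number of NULLs,
-- so b is only guaranteed unique among its non-null values.
CREATE TABLE t2 (a int PRIMARY KEY, b int);
CREATE UNIQUE INDEX t2_uk_b ON t2 (b);
INSERT INTO t2 VALUES (1, NULL), (2, NULL);  -- accepted: NULL is not equal to NULL

-- Join removal is unaffected, because a NULL t2.b can never satisfy
-- t1.b = t2.b, so each t1 row still matches at most one t2 row:
-- SELECT t1.* FROM t1 LEFT JOIN t2 ON t1.b = t2.b;
```
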
It might be best to have a go at
changing the wording to be more explicit to what you mean there.


", "msg_date": "Wed, 1 Apr 2020 16:38:42 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "The updated patch should fix all the issues. See the comments below
for more information.

On Tue, Mar 31, 2020 at 9:44 AM David Rowley <dgrowleyml@gmail.com> wrote:

> On Sun, 29 Mar 2020 at 20:50, Andy Fan <zhihui.fan1213@gmail.com> wrote:
> > Some other changes made in the new patch:
> > 1. Fixed bug for UniqueKey calculation for OUTER join.
> > 2. Fixed some typo error in comments.
> > 3. Renamed the field "grantee" as "guarantee".
>
> I've had a look over this patch. Thank you for doing further work on it.
>
> I've noted down the following during my read of the code:
>
> 1. There seem to be some cases where joins are no longer being
> detected as unique. This is evident in postgres_fdw.out. We shouldn't
> be regressing any of these cases.
>

The issue here is that I didn't distinguish the one-row case in the
UniqueKey struct.  When an outer relation is joined with a relation
which has only one row, at most one row can match each outer row.  The
only-one-row cases in postgres_fdw.out come from aggregate calls.

I added a new field "onerow" to the UniqueKey struct, and optimized the
onerow UniqueKey to not record every expr.  See add_uniquekey_for_onerow
and relation_is_onerow.


> 2. The following change does not seem like it should be part of this
> patch. 
I understand you perhaps have done as you think it will
> improve the performance of checking if an expression is in a list of
> expressions.
>
> If you think that is true, then please do it as a separate effort and
> provide benchmarks with your findings.
>
Rolled back.


> 3. list_all_members_in. I think this would be better named as
> list_is_subset. Please follow the lead of bms_is_subset().
> Additionally, you should Assert that IsPointerList is true as there's
> nothing else to indicate that it can't be used for an int or Oid list.
>
Done


> 4. guarantee is not a very good name for the field in UniqueKey.
> Maybe something like is_not_null?
>
I tried is_not_null, but when is_not_null equals false it is a double
negation, which didn't feel good to me.  In the end I used multi_nullvals
to show that the UniqueKey may yield multiple null values, so the
uniqueness is not guaranteed.


> 5. I think you should be performing a bms_del_member during join
> removal rather than removing this Assert()
>
> - Assert(bms_equal(rel->relids, root->all_baserels));

Done

>
> FWIW, it's far from perfect that you've needed to delay the left join
> removal, but I do understand why you've done it. It's also far from
> perfect that you're including removed relations in the
> total_table_pages calculation. c6e4133fae1 took some measures to
> improve this calculation and this is making it worse again.
>
> 6. Can you explain why you moved the populate_baserel_uniquekeys()
> call out of set_plain_rel_size()?
>
> 7. I don't think the removal of rel_supports_distinctness() is
> warranted. 
Is it not ok to check if the relation has any uniquekeys?\n> It's possible, particularly in join_is_removable that this can save\n> quite a large amount of effort.\n>\n> Done\n\n\n> 8. Your spelling of unique is incorrect in many places:\n>\n> src/backend/nodes/makefuncs.c: * makeUnqiueKey\n> src/backend/optimizer/path/uniquekeys.c:static List\n> *initililze_unqiuecontext_for_joinrel(RelOptInfo *joinrel,\n> src/backend/optimizer/path/uniquekeys.c: * check if combination of\n> unqiuekeys from both side is still useful for us,\n> src/backend/optimizer/path/uniquekeys.c: outerrel_uniquekey_ctx\n> = initililze_unqiuecontext_for_joinrel(joinrel, outerrel);\n> src/backend/optimizer/path/uniquekeys.c: innerrel_uniquekey_ctx\n> = initililze_unqiuecontext_for_joinrel(joinrel, innerrel);\n> src/backend/optimizer/path/uniquekeys.c: * we need to convert the\n> UnqiueKey from sub_final_rel to currel via the positions info in\n> src/backend/optimizer/path/uniquekeys.c: ctx->pos =\n> pos; /* the position in current targetlist, will be used to set\n> UnqiueKey */\n> src/backend/optimizer/path/uniquekeys.c: * Check if Unqiue key of the\n> innerrel is valid after join. 
innerrel's UniqueKey\n> src/backend/optimizer/path/uniquekeys.c: *\n> initililze_unqiuecontext_for_joinrel\n> src/backend/optimizer/path/uniquekeys.c: * all the unqiuekeys which\n> are not possible to use later\n>\n> src/backend/optimizer/path/uniquekeys.c:initililze_unqiuecontext_for_joinrel(RelOptInfo\n> *joinrel, RelOptInfo *inputrel)\n> src/backend/optimizer/plan/analyzejoins.c: /*\n> This UnqiueKey is what we want */\n> src/backend/optimizer/plan/planner.c: /* If we the result if unqiue\n> already, we just return the input_rel directly */\n> src/include/nodes/pathnodes.h: * exprs is a list of exprs which is\n> unqiue on current RelOptInfo.\n> src/test/regress/expected/join.out:-- XXXX: since b.id is unqiue now\n> so the group by cluase is erased, so\n> src/test/regress/expected/select_distinct.out:-- create unqiue index on\n> dist_p\n> src/test/regress/expected/select_distinct.out:-- we also support\n> create unqiue index on each child tables\n> src/test/regress/sql/join.sql:-- XXXX: since b.id is unqiue now so the\n> group by cluase is erased, so\n> src/test/regress/sql/select_distinct.sql:-- create unqiue index on dist_p\n> src/test/regress/sql/select_distinct.sql:-- we also support create\n> unqiue index on each child tables\n>\n9. A few things wrong with the following fragment:\n>\n> /* set the not null info now */\n> ListCell *lc;\n> foreach(lc, find_nonnullable_vars(qual))\n> {\n> Var *var = lfirst_node(Var, lc);\n> RelOptInfo *rel = root->simple_rel_array[var->varno];\n> if (var->varattno > InvalidAttrNumber)\n> rel->not_null_cols = bms_add_member(rel->not_null_cols, var->varattno);\n> }\n>\n> a. including a function call in the foreach macro is not a practise\n> that we really follow. It's true that the macro now assigns the 2nd\n> param to a variable. Previous to 1cff1b95ab6 this was not the case and\n> it's likely best not to leave any bad examples around that code which\n> might get backported might follow.\n> b. 
We generally subtract InvalidAttrNumber from varattno when\n> including in a Bitmapset.\n> c. not_null_cols is not well named. I think notnullattrs\n> d. not_null_cols should not be a Relids type, it should be Bitmapset.\n>\n> Above 2 Done\n\n\n> 10. add_uniquekey_for_onerow() seems pretty wasteful. Is there really\n> a need to add each item in the rel's targetlist to the uniquekey list?\n> What if we just add an empty list to the unique keys, that way if we\n> need to test if some expr is a superset of any uniquekey, then we'll\n> see it is as any set is a superset of an empty set. Likely the empty\n> set of uniquekeys should be the only one in the rel's uniquekey list.\n>\n>\nNow I use a single UniqueKey to show this situation. See\nadd_uniquekey_for_onerow and relation_is_onerow.\n\n\n> 11. In create_distinct_paths() the code is now calling\n> get_sortgrouplist_exprs() multiple times with the same input. I think\n> it would be better to just call it once and set the result in a local\n> variable.\n>\n> Done\n\n\n> 12. The comment in the code below is not true. The List contains\n> Lists, of which contain UniqueKeys\n>\n> List *uniquekeys; /* List of UniqueKey */\n>\n> 13. I'm having trouble parsing the final sentence in:\n>\n> + * can only guarantee the uniqueness without considering the null values.\n> This\n> + * field is necessary for remove_useless_join & reduce_unique_semijions\n> since\n> + * these cases don't care about the null values.\n>\n> Why is the field which stores the nullability of the key required for\n> code that does not care about the nullability of the key?\n>\n> Also please check your spelling of the word \"join\"\n>\n>\nActually I didn't find the spell error for \"join\"..\n\n\n> 14. 
In the following fragment, instead of using i+1, please assign the\n> FormData_pg_attribute to a variable named attr and use attr->attnum.\n> Also, please see what I mentioned above about subtracting\n> InvalidAttrNumber\n>\n> + rel->not_null_cols = bms_add_member(rel->not_null_cols, i+1);\n>\n> Done\n\n\n> 15. The tests you've changed the expected outcome of in join.out\n> should be updated so that the GROUP BY and DISTINCT clause is not\n> removed. This will allow the test to continue testing what it was\n> intended to test. You can do this by changing the columns in the GROUP\n> BY clause so that the new code does not find uniquekeys for those\n> columns.\n>\n> Done\n\n\n> 16. The tests in aggregates.out are in a similar situation. There are\n> various tests trying to ensure that remove_useless_groupby_columns()\n> does what it's meant to do. You can modify these tests to add a join\n> which is non-unique to effectively duplicate the PK column.\n>\n>\nThere are some exceptions in this part.\n1. The test for remove_useless_groupby_columns has some overlap\nwith our current erasing group node logic, like the test for a single\nrelation,\nso I just modified 2 test cases for this purpose.\n2. When I read the code in remove_useless_groupby_columns, I found a\nnew case for our UniqueKey.\nselect * from m1 where a > (select avg(b) from m2 group by *M1.A*);\nwhere the m1.a will have var->varlevelsup > 0; how should we set the\nUniqueKey\nfor this grouprel? I added an incomplete check in the\nadd_uniquekey_from_sortgroups\nfunction, but I'm not sure if we need that.\n3. remove_useless_groupby_columns maintains the parse->constraintDeps\nwhen it depends on primary key, but UniqueKey doesn't maintain such data.\nsince we have translation layer which should protect us from the\nconcurrency issue\nand isolation issue. Do we need to do that as well in UniqueKey?\n\n\n> 17. In your select_distinct tests, can you move away from naming the\n> tables starting with select_distinct? 
It makes reading queries pretty\n> hard.\n>\n> e.g. explain (costs off) select distinct uk1, uk2 from\n> select_distinct_a where uk2 is not null;\n>\n> When I first glanced that, I failed to see the underscores and the\n> query looked invalid.\n\n\n>\n18. Check the spelling if \"erased\". You have it spelt as \"ereased\" in\n> a couple of locations.\n>\n> 19. Please pay attention to the capitalisation of SQL keywords in the\n> test files you've modified. I understand we're very inconsistent in\n> this department in general, but we do at least try not to mix\n> capitalisation within the same file. Basically, please upper case the\n> keywords in select_distinct.sql\n>\n> 20. In addition to the above, please try to wrap long SQL lines so\n> they're below 80 chars.\n>\n> All above 4 item Done.\n\n\n> I'll review the patch in more detail once the above points have been\n> addressed.\n\n\n>\nDavid\n>", "msg_date": "Fri, 3 Apr 2020 10:17:11 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "On Fri, 3 Apr 2020 at 15:18, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n> The updated patch should fixed all the issues. See the comments below for more\n> information.\n>\n> On Tue, Mar 31, 2020 at 9:44 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>>\n>> + * can only guarantee the uniqueness without considering the null values. This\n>> + * field is necessary for remove_useless_join & reduce_unique_semijions since\n>> + * these cases don't care about the null values.\n>>\n>> Why is the field which stores the nullability of the key required for\n>> code that does not care about the nullability of the key?\n>>\n>> Also please check your spelling of the word \"join\"\n>>\n>\n> Actually I didn't find the spell error for \"join\"..\n\nIt was in reduce_unique_semijions. That should be\nreduce_unique_semijoins. I see you fixed it in the patch though.\n\n> 3. 
remove_useless_groupby_columns maintains the parse->constraintDeps\n> when it depends on primary key, but UniqueKey doesn't maintain such data.\n> since we have translation layer which should protect us from the concurrency issue\n> and isolation issue. Do we need to do that as well in UniqueKey?\n\nI'm pretty sure that code is pretty bogus in\nremove_useless_groupby_columns(). It perhaps was just copied from\ncheck_functional_grouping(), where it is required. Looks like the\n(ahem) author of d4c3a156c got that wrong... :-(\n\nThe reason check_functional_grouping() needs it is for things like\ncreating a view with a GROUP BY clause that has a column in the SELECT\nlist that is functionally dependant on the GROUP BY columns. e.g:\n\ncreate table z (a int primary key, b int);\ncreate view view_z as select a,b from z group by a;\nalter table z drop constraint z_pkey;\nERROR: cannot drop constraint z_pkey on table z because other objects\ndepend on it\nDETAIL: view view_z depends on constraint z_pkey on table z\nHINT: Use DROP ... CASCADE to drop the dependent objects too.\n\nHere that view would become invalid if the PK was dropped, so we must\nrecord the dependency in that case. Doing so is what causes that\nerror message.\n\nFor just planner smarts such as LEFT JOIN removal, Unique Joins, and\nall this Unique Key stuff, we really don't need to record the\ndependency as if the index or constraint is dropped, then that'll\ncause a relcache invalidation and we'll see the invalidation when we\nattempt to execute the cached plan. 
That will cause the statement to\nbe re-planned and we'll not see the unique index when we do that.\n\nWe should probably just get rid of that code in\nremove_useless_groupby_columns() to stop people getting confused about\nthat.\n\n\n", "msg_date": "Fri, 3 Apr 2020 16:08:43 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "David Rowley <dgrowleyml@gmail.com> writes:\n> For just planner smarts such as LEFT JOIN removal, Unique Joins, and\n> all this Unique Key stuff, we really don't need to record the\n> dependency as if the index or constraint is dropped, then that'll\n> cause a relcache invalidation and we'll see the invalidation when we\n> attempt to execute the cached plan. That will cause the statement to\n> be re-planned and we'll not see the unique index when we do that.\n\nYou need to make sure that the thing you're concerned about will actually\ncause a relcache invalidation of a table in the query. But yeah, if it\nwill then there's not a need to have any other invalidation mechanism.\n\n(It occurs to me BTW that we've been overly conservative about using\nNOT NULL constraints in planning, because of failing to consider that.\nAddition or drop of NOT NULL has to cause a change in\npg_attribute.attnotnull, which will definitely cause a relcache inval\non its table, cf rules in CacheInvalidateHeapTuple(). 
So we *don't*\nneed to have a pg_constraint entry corresponding to the NOT NULL, as\nwe've mistakenly supposed in some past discussions.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 Apr 2020 23:40:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "On Fri, 3 Apr 2020 at 16:40, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> (It occurs to me BTW that we've been overly conservative about using\n> NOT NULL constraints in planning, because of failing to consider that.\n> Addition or drop of NOT NULL has to cause a change in\n> pg_attribute.attnotnull, which will definitely cause a relcache inval\n> on its table, cf rules in CacheInvalidateHeapTuple(). So we *don't*\n> need to have a pg_constraint entry corresponding to the NOT NULL, as\n> we've mistakenly supposed in some past discussions.)\n\nAgreed for remove_useless_groupby_columns(), but we'd need it if we\nwanted to detect functional dependencies in\ncheck_functional_grouping() using unique indexes.\n\n\n", "msg_date": "Fri, 3 Apr 2020 17:07:54 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "On Fri, Apr 3, 2020 at 12:08 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Fri, 3 Apr 2020 at 16:40, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > (It occurs to me BTW that we've been overly conservative about using\n> > NOT NULL constraints in planning, because of failing to consider that.\n> > Addition or drop of NOT NULL has to cause a change in\n> > pg_attribute.attnotnull, which will definitely cause a relcache inval\n> > on its table, cf rules in CacheInvalidateHeapTuple(). 
So we *don't*\n> > need to have a pg_constraint entry corresponding to the NOT NULL, as\n> > we've mistakenly supposed in some past discussions.)\n>\n> Agreed for remove_useless_groupby_columns(), but we'd need it if we\n> wanted to detect functional dependencies in\n> check_functional_grouping() using unique indexes.\n>\n\nThanks for the explanation. I will add the removal in the next version of\nthis\npatch.\n\nBest Regards\nAndy Fan", "msg_date": "Fri, 3 Apr 2020 16:54:33 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "On Fri, 3 Apr 2020 at 21:56, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n>\n>\n> On Fri, Apr 3, 2020 at 12:08 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>>\n>> On Fri, 3 Apr 2020 at 16:40, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> > (It occurs to me BTW that we've been overly conservative about using\n>> > NOT NULL constraints in planning, because of failing to consider that.\n>> > Addition or drop of NOT NULL has to cause a change in\n>> > pg_attribute.attnotnull, which will definitely cause a relcache inval\n>> > on its table, cf rules in CacheInvalidateHeapTuple(). So we *don't*\n>> > need to have a pg_constraint entry corresponding to the NOT NULL, as\n>> > we've mistakenly supposed in some past discussions.)\n>>\n>> Agreed for remove_useless_groupby_columns(), but we'd need it if we\n>> wanted to detect functional dependencies in\n>> check_functional_grouping() using unique indexes.\n>\n>\n> Thanks for the explanation. I will add the removal in the next version of this\n> patch.\n\nThere's no need for this patch to touch\nremove_useless_groupby_columns(). Fixes for that should be considered\nindependently and *possibly* even backpatched.", "msg_date": "Sat, 4 Apr 2020 14:31:32 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "On Fri, 3 Apr 2020 at 15:18, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> All above 4 item Done.\n\nJust to explain my view on this going forward for PG14. I do plan to\ndo a more thorough review of this soon. 
I wasn't so keen on pursuing\nthis for PG13 as the skip scans patch [1] needs to use the same\ninfrastructure this patch has added and it does not, yet.\n\nThe infrastructure (knowing the unique properties of a RelOptInfo), as\nprovided by the patch Andy has been working on, which is based on my\nrough prototype version, I believe should be used for the skip scans\npatch as well. I understand that as skip scans currently stands,\nJasper has done quite a bit of work to add the UniqueKeys, however,\nthis was unfortunately based on some early description of UniqueKeys\nwhere I had thought that we could just store EquivalenceClasses. I no\nlonger think that's the case, and I believe the implementation that we\nrequire is something more along the lines of Andy's latest version of\nthe patch. However, I've not quite stared at it long enough to be\nhighly confident in that.\n\nI'd like to strike up a bit of a plan to move both Andy's work and the\nSkip scans work forward for PG14.\n\nHere are my thoughts so far:\n\n1. Revise v4 of remove DISTINCT patch to split the patch into two pieces.\n\n0001 should add the UniqueKey code but not any additional planner\nsmarts to use them (i.e remove GROUP BY / DISTINCT) elimination parts.\nJoin removals and Unique joins should use UniqueKeys in this patch.\n0002 should add back the GROUP BY / DISTINCT smarts and add whatever\ntests should be added for that and include updating existing expected\nresults and modifying any tests which no longer properly test what\nthey're meant to be testing.\n\nI've done this with the attached patch.\n\n2. David / Jesper to look at 0001 and build or align the existing skip\nscans 0001 patch to make use of Andy's 0001 patch. 
This will require\ntagging UniqueKeys onto Paths, not just RelOptInfos, plus a bunch of\nother work.\n\n\nClearly UniqueKeys must suit both needs and since we have two\ndifferent implementations each providing some subset of the features,\nthen clearly we're not yet ready to move both skip scans and this\npatch forward together. We need to align that and move both patches\nforward together. Hopefully, the attached 0001 patch helps move that\nalong.\n\n\n\nWhile I'm here, a quick review of Andy's v4 patch. I didn't address\nany of this in the attached v5. These are only based on what I saw\nwhen shuffling some code around. It's not an in-depth review.\n\n1. Out of date comment in join.sql\n\n-- join removal is not possible when the GROUP BY contains a column that is\n-- not in the join condition. (Note: as of 9.6, we notice that b.id is a\n-- primary key and so drop b.c_id from the GROUP BY of the resulting plan;\n-- but this happens too late for join removal in the outer plan level.)\nexplain (costs off)\nselect d.* from d left join (select d, c_id from b group by b.d, b.c_id) s\n on d.a = s.d;\n\nYou've changed the GROUP BY clause so it does not include b.id, so the\nNote in the comment is now misleading.\n\n2. I think 0002 is overly restrictive in its demands that\nparse->hasAggs must be false. We should be able to just use a Group\nAggregate with unsorted input when the input_rel is unique on the\nGROUP BY clause. This will save on hashing and sorting. Basically\nsimilar to what we do for when a query contains aggregates without any\nGROUP BY.\n\n3. 
I don't quite understand why you changed this to a right join:\n\n -- Test case where t1 can be optimized but not t2\n explain (costs off) select t1.*,t2.x,t2.z\n-from t1 inner join t2 on t1.a = t2.x and t1.b = t2.y\n+from t1 right join t2 on t1.a = t2.x and t1.b = t2.y\n\nPerhaps this change is left over from some previous version of the patch?\n\n[1] https://commitfest.postgresql.org/27/1741/", "msg_date": "Tue, 14 Apr 2020 21:09:31 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "Hi David:\n\nThanks for your time.\n\n\n> 1. Out of date comment in join.sql\n>\n> -- join removal is not possible when the GROUP BY contains a column that is\n> -- not in the join condition. (Note: as of 9.6, we notice that b.id is a\n> -- primary key and so drop b.c_id from the GROUP BY of the resulting plan;\n> -- but this happens too late for join removal in the outer plan level.)\n> explain (costs off)\n> select d.* from d left join (select d, c_id from b group by b.d, b.c_id) s\n> on d.a = s.d;\n>\n> You've changed the GROUP BY clause so it does not include b.id, so the\n> Note in the comment is now misleading.\n>\n\nThanks, I will fix this one in the following patch.\n\n\n>\n> 2. I think 0002 is overly restrictive in its demands that\n> parse->hasAggs must be false. We should be able to just use a Group\n> Aggregate with unsorted input when the input_rel is unique on the\n> GROUP BY clause. This will save on hashing and sorting. Basically\n> similar to what we do for when a query contains aggregates without any\n> GROUP BY.\n>\n>\nYes, This will be a perfect result, the difficult is the current\naggregation function\nexecution is highly coupled with Agg node(ExecInitAgg) which is removed in\nthe\nunique case. I ever make the sum (w/o finalfn) and avg(with finalfn)\nworks in a hack way, but still many stuffs is not handled. 
Let me prepare\nthe code\nfor this purpose in 1~2 days to see if I'm going with the right direction.\n\nAshutosh also has an idea[1] that if the relation underlying an Agg node is\nknown to be unique for given groupByClause, we could safely use\nAGG_SORTED strategy. Though the input is not ordered, it's sorted thus for\nevery row Agg\nnode will combine/finalize the aggregate result.\n\nI will target the perfect result first and see how many effort do we need,\nif not,\nI will try Ashutosh's suggestion.\n\n\n\n> 3. I don't quite understand why you changed this to a right join:\n>\n> -- Test case where t1 can be optimized but not t2\n> explain (costs off) select t1.*,t2.x,t2.z\n> -from t1 inner join t2 on t1.a = t2.x and t1.b = t2.y\n> +from t1 right join t2 on t1.a = t2.x and t1.b = t2.y\n>\n> Perhaps this change is left over from some previous version of the patch?\n>\n\nThis is on purpose. the original test case is used to test we can short\nthe group key for t1 but not t2 for aggregation, but if I keep the inner\njoin, the\naggnode will be removed totally, so I have to change it to right join in\norder\nto keep the aggnode. The full test case is:\n\n-- Test case where t1 can be optimized but not t2\n\nexplain (costs off) select t1.*,t2.x,t2.z\n\nfrom t1 inner join t2 on t1.a = t2.x and t1.b = t2.y\n\ngroup by t1.a,t1.b,t1.c,t1.d,t2.x,t2.z;\n\n\nwhere (a, b) is the primary key of t1.\n\n\n\n[1]\nhttps://www.postgresql.org/message-id/CAExHW5sY%2BL6iZ%3DrwnL7n3jET7aNLCNQimvfcS7C%2B5wmdjmdPiw%40mail.gmail.com\n\nHi David:Thanks for your time.  \n1. Out of date comment in join.sql\n\n-- join removal is not possible when the GROUP BY contains a column that is\n-- not in the join condition.  
(Note: as of 9.6, we notice that b.id is a\n-- primary key and so drop b.c_id from the GROUP BY of the resulting plan;\n-- but this happens too late for join removal in the outer plan level.)\nexplain (costs off)\nselect d.* from d left join (select d, c_id from b group by b.d, b.c_id) s\n  on d.a = s.d;\n\nYou've changed the GROUP BY clause so it does not include b.id, so the\nNote in the comment is now misleading.Thanks, I will fix this one in the following patch.  \n\n2. I think 0002 is overly restrictive in its demands that\nparse->hasAggs must be false. We should be able to just use a Group\nAggregate with unsorted input when the input_rel is unique on the\nGROUP BY clause.  This will save on hashing and sorting.  Basically\nsimilar to what we do for when a query contains aggregates without any\nGROUP BY.\nYes,  This will be a perfect result,  the difficult is the current aggregation functionexecution is highly coupled with Agg node(ExecInitAgg) which is removed in theunique case.  I ever make the sum (w/o finalfn) and avg(with finalfn)works in a hack way, but still many stuffs is not handled.  Let me prepare the codefor this purpose in 1~2  days to see if I'm going with the right direction. Ashutosh also has an idea[1] that if the relation underlying an Agg node is known to be unique for given groupByClause, we could safely use AGG_SORTED strategy. Though the input is not ordered, it's sorted thus for every row Aggnode will combine/finalize the aggregate result.  I will target the perfect result first and see how many effort do we need, if not,I will try Ashutosh's suggestion.  \n3. I don't quite understand why you changed this to a right join:\n\n -- Test case where t1 can be optimized but not t2\n explain (costs off) select t1.*,t2.x,t2.z\n-from t1 inner join t2 on t1.a = t2.x and t1.b = t2.y\n+from t1 right join t2 on t1.a = t2.x and t1.b = t2.y\n\nPerhaps this change is left over from some previous version of the patch?This is on purpose.   
the original test case is used to test we can shortthe group key for t1 but not t2 for aggregation, but if I keep the inner join, the aggnode will be removed totally, so I have to change it to right join in orderto keep the aggnode.  The full test case is:\n-- Test case where t1 can be optimized but not t2\nexplain (costs off) select t1.*,t2.x,t2.z\nfrom t1 inner join t2 on t1.a = t2.x and t1.b = t2.y\ngroup by t1.a,t1.b,t1.c,t1.d,t2.x,t2.z;where (a, b) is the primary key of t1. [1] https://www.postgresql.org/message-id/CAExHW5sY%2BL6iZ%3DrwnL7n3jET7aNLCNQimvfcS7C%2B5wmdjmdPiw%40mail.gmail.com", "msg_date": "Wed, 15 Apr 2020 08:18:48 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "On Wed, 15 Apr 2020 at 12:19, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n>> 2. I think 0002 is overly restrictive in its demands that\n>> parse->hasAggs must be false. We should be able to just use a Group\n>> Aggregate with unsorted input when the input_rel is unique on the\n>> GROUP BY clause. This will save on hashing and sorting. Basically\n>> similar to what we do for when a query contains aggregates without any\n>> GROUP BY.\n>>\n>\n> Yes, This will be a perfect result, the difficult is the current aggregation function\n> execution is highly coupled with Agg node(ExecInitAgg) which is removed in the\n> unique case.\n\nThis case here would be slightly different. It would be handled by\nstill creating a Group Aggregate path, but just not consider Hash\nAggregate and not Sort the input to the Group Aggregate path. Perhaps\nthat's best done by creating a new flag bit and using it in\ncreate_grouping_paths() in the location where we set the flags\nvariable. If you determine that the input_rel is unique for the GROUP\nBY clause, and that there are aggregate functions, then set a flag,\ne.g GROUPING_INPUT_UNIQUE. 
Likely there will be a few other flags that\nyou can skip setting in that function, for example, there's no need to\ncheck if the input can sort, so no need for GROUPING_CAN_USE_SORT,\nsince you won't need to sort, likewise for GROUPING_CAN_USE_HASH. I'd\nsay there also is no need for checking if we can set\nGROUPING_CAN_PARTIAL_AGG (What would be the point in doing partial\naggregation when there's 1 row per group?) Then down in\nadd_paths_to_grouping_rel(), just add a special case before doing any\nother code, such as:\n\nif ((extra->flags & GROUPING_INPUT_UNIQUE) != 0 && parse->groupClause != NIL)\n{\nPath *path = input_rel->cheapest_total_path;\n\nadd_path(grouped_rel, (Path *)\ncreate_agg_path(root,\ngrouped_rel,\npath,\ngrouped_rel->reltarget,\nAGG_SORTED,\nAGGSPLIT_SIMPLE,\nparse->groupClause,\nhavingQual,\nagg_costs,\ndNumGroups));\nreturn;\n}\n\nYou may also want to consider the cheapest startup path there too so\nthat the LIMIT processing can do something smarter later in planning\n(assuming cheapest_total_path != cheapest_startup_path (which you'd\nneed to check for)).\n\nPerhaps it would be better to only set the GROUPING_INPUT_UNIQUE if\nthere is a groupClause, then just Assert(parse->groupClause != NIL)\ninside that if.\n\nDavid\n\n\n", "msg_date": "Wed, 15 Apr 2020 15:00:40 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "On Wed, Apr 15, 2020 at 11:00 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Wed, 15 Apr 2020 at 12:19, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> >\n> >> 2. I think 0002 is overly restrictive in its demands that\n> >> parse->hasAggs must be false. We should be able to just use a Group\n> >> Aggregate with unsorted input when the input_rel is unique on the\n> >> GROUP BY clause. This will save on hashing and sorting. 
Basically\n> >> similar to what we do for when a query contains aggregates without any\n> >> GROUP BY.\n> >>\n> >\n> > Yes, This will be a perfect result, the difficult is the current\n> aggregation function\n> > execution is highly coupled with Agg node(ExecInitAgg) which is removed\n> in the\n> > unique case.\n>\n> This case here would be slightly different. It would be handled by\n> still creating a Group Aggregate path, but just not consider Hash\n> Aggregate and not Sort the input to the Group Aggregate path. Perhaps\n> that's best done by creating a new flag bit and using it in\n> create_grouping_paths() in the location where we set the flags\n> variable. If you determine that the input_rel is unique for the GROUP\n> BY clause, and that there are aggregate functions, then set a flag,\n> e.g GROUPING_INPUT_UNIQUE. Likely there will be a few other flags that\n> you can skip setting in that function, for example, there's no need to\n> check if the input can sort, so no need for GROUPING_CAN_USE_SORT,\n> since you won't need to sort, likewise for GROUPING_CAN_USE_HASH. I'd\n> say there also is no need for checking if we can set\n> GROUPING_CAN_PARTIAL_AGG (What would be the point in doing partial\n> aggregation when there's 1 row per group?) 
Then down in\n> add_paths_to_grouping_rel(), just add a special case before doing any\n> other code, such as:\n>\n> if ((extra->flags & GROUPING_INPUT_UNIQUE) != 0 && parse->groupClause !=\n> NIL)\n> {\n> Path *path = input_rel->cheapest_total_path;\n>\n> add_path(grouped_rel, (Path *)\n> create_agg_path(root,\n> grouped_rel,\n> path,\n> grouped_rel->reltarget,\n> AGG_SORTED,\n> AGGSPLIT_SIMPLE,\n> parse->groupClause,\n> havingQual,\n> agg_costs,\n> dNumGroups));\n> return;\n> }\n>\n> You may also want to consider the cheapest startup path there too so\n> that the LIMIT processing can do something smarter later in planning\n> (assuming cheapest_total_path != cheapest_startup_path (which you'd\n> need to check for)).\n>\n> Perhaps it would be better to only set the GROUPING_INPUT_UNIQUE if\n> there is a groupClause, then just Assert(parse->groupClause != NIL)\n> inside that if.\n>\n>\nThank you for your detailed explanation. The attached v6 has included\nthis feature.\nHere is the data to show the improvement.\n\nTest cases:\ncreate table grp2 (a int primary key, b char(200), c int);\ninsert into grp2 select i, 'x', i from generate_series(1, 10000000)i;\nanalyze grp2;\nexplain analyze select a, sum(c) from grp2 group by a;\n\nw/o this feature:\n\npostgres=# explain analyze select a, sum(c) from grp2 group by a;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=0.43..712718.44 rows=10000000 width=12) (actual\ntime=0.088..15491.027 rows=10000000 loops=1)\n Group Key: a\n -> Index Scan using grp2_pkey on grp2 (cost=0.43..562718.44\nrows=10000000 width=8) (actual time=0.068..6503.459 rows=10000000 loops=1)\n Planning Time: 0.916 ms\n Execution Time: *16252.397* ms\n(5 rows)\n\nSince the order of my data in the heap and index is exactly the same, which makes\nthe index scan much faster. 
The following is to test the cost of the\n*hash* aggregation,\n\npostgres=# set enable_indexscan to off;\nSET\npostgres=# explain analyze select a, sum(c) from grp2 group by a;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=765531.00..943656.00 rows=10000000 width=12) (actual\ntime=14424.379..30133.171 rows=10000000 loops=1)\n Group Key: a\n Planned Partitions: 128\n Peak Memory Usage: 4153 kB\n Disk Usage: 2265608 kB\n HashAgg Batches: 640\n -> Seq Scan on grp2 (cost=0.00..403031.00 rows=10000000 width=8)\n(actual time=0.042..2808.281 rows=10000000 loops=1)\n Planning Time: 0.159 ms\n Execution Time: *31098.804* ms\n(9 rows)\n\nWith this feature:\nexplain analyze select a, sum(c) from grp2 group by a;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=0.00..553031.57 rows=10000023 width=12) (actual\ntime=0.044..13209.485 rows=10000000 loops=1)\n Group Key: a\n -> Seq Scan on grp2 (cost=0.00..403031.23 rows=10000023 width=8)\n(actual time=0.023..4938.171 rows=10000000 loops=1)\n Planning Time: 0.400 ms\n Execution Time: *13749.121* ms\n(5 rows)\n\nDuring the implementation, I also added an AGG_UNIQUE AggStrategy to\nrecord this information in the Agg plan node; this is a simple way to do it and\nshould be semantically correct.\n\n -\n\nV6 also includes:\n1. Fix the misleading comment you mentioned above.\n2. 
Fixed a concern case for `relation_has_uniquekeys_for` function.\n\n+ /* For UniqueKey->onerow case, the uniquekey->exprs is empty as well\n+ * so we can't rely on list_is_subset to handle this special cases\n+ */\n+ if (exprs == NIL)\n+ return false;\n\n\nBest Regards\nAndy Fan", "msg_date": "Thu, 16 Apr 2020 10:17:00 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "On Thu, Apr 16, 2020 at 7:47 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n> (9 rows)\n>\n> With this feature:\n> explain analyze select a, sum(c) from grp2 group by a;\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------\n> GroupAggregate (cost=0.00..553031.57 rows=10000023 width=12) (actual time=0.044..13209.485 rows=10000000 loops=1)\n> Group Key: a\n> -> Seq Scan on grp2 (cost=0.00..403031.23 rows=10000023 width=8) (actual time=0.023..4938.171 rows=10000000 loops=1)\n> Planning Time: 0.400 ms\n> Execution Time: 13749.121 ms\n> (5 rows)\n>\n\nApplying the patch gives a white space warning\ngit am /tmp/v6-000*\nApplying: Introduce UniqueKeys to determine RelOptInfo unique properties\n.git/rebase-apply/patch:545: indent with spaces.\n /* Fast path */\nwarning: 1 line adds whitespace errors.\nApplying: Skip DISTINCT / GROUP BY if input is already unique\n\nCompiling the patch causes one warning\nnodeAgg.c:2134:3: warning: enumeration value ‘AGG_UNIQUE’ not handled\nin switch [-Wswitch]\n\nI have not looked at the patch. The numbers above look good. The time\nspent in summing up a column in each row (we are summing only one\nnumber per group) is twice the time it took to read those rows from\nthe table. That looks odd. But it may not be something unrelated to\nyour patch. 
I also observed that for explain analyze select a from\ngrp2 group by a; we just produce a plan containing seq scan node,\nwhich is a good thing.\n\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Thu, 16 Apr 2020 18:05:57 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "On Thu, Apr 16, 2020 at 8:36 PM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\nwrote:\n\n> On Thu, Apr 16, 2020 at 7:47 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n> > (9 rows)\n> >\n> > With this feature:\n> > explain analyze select a, sum(c) from grp2 group by a;\n> > QUERY PLAN\n> >\n> --------------------------------------------------------------------------------------------------------------------------\n> > GroupAggregate (cost=0.00..553031.57 rows=10000023 width=12) (actual\n> time=0.044..13209.485 rows=10000000 loops=1)\n> > Group Key: a\n> > -> Seq Scan on grp2 (cost=0.00..403031.23 rows=10000023 width=8)\n> (actual time=0.023..4938.171 rows=10000000 loops=1)\n> > Planning Time: 0.400 ms\n> > Execution Time: 13749.121 ms\n> > (5 rows)\n> >\n>\n> Applying the patch gives a white space warning\n> git am /tmp/v6-000*\n> Applying: Introduce UniqueKeys to determine RelOptInfo unique properties\n> .git/rebase-apply/patch:545: indent with spaces.\n> /* Fast path */\n> warning: 1 line adds whitespace errors.\n> Applying: Skip DISTINCT / GROUP BY if input is already unique\n>\n> Compiling the patch causes one warning\n> nodeAgg.c:2134:3: warning: enumeration value ‘AGG_UNIQUE’ not handled\n> in switch [-Wswitch]\n>\n>\nThanks, I will fix them together with some detailed review suggestion.\n(I know the review need lots of time, so appreciated for it).\n\n\n> I have not looked at the patch. The numbers above look good. 
The time\n> spent in summing up a column in each row (we are summing only one\n> number per group) is twice the time it took to read those rows from\n> the table. That looks odd. But it may not be something unrelated to\n> your patch. I also observed that for explain analyze select a from\n> grp2 group by a; we just produce a plan containing seq scan node,\n> which is a good thing.\n>\n\nGreat and welcome back Ashutosh:)\n\n\n> --\n> Best Wishes,\n> Ashutosh Bapat\n>\n\nOn Thu, Apr 16, 2020 at 8:36 PM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> wrote:On Thu, Apr 16, 2020 at 7:47 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n> (9 rows)\n>\n> With this feature:\n> explain analyze select a, sum(c) from grp2 group by a;\n>                                                         QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------\n>  GroupAggregate  (cost=0.00..553031.57 rows=10000023 width=12) (actual time=0.044..13209.485 rows=10000000 loops=1)\n>    Group Key: a\n>    ->  Seq Scan on grp2  (cost=0.00..403031.23 rows=10000023 width=8) (actual time=0.023..4938.171 rows=10000000 loops=1)\n>  Planning Time: 0.400 ms\n>  Execution Time: 13749.121 ms\n> (5 rows)\n>\n\nApplying the patch gives a white space warning\ngit am /tmp/v6-000*\nApplying: Introduce UniqueKeys to determine RelOptInfo unique properties\n.git/rebase-apply/patch:545: indent with spaces.\n    /* Fast path */\nwarning: 1 line adds whitespace errors.\nApplying: Skip DISTINCT / GROUP BY if input is already unique\n\nCompiling the patch causes one warning\nnodeAgg.c:2134:3: warning: enumeration value ‘AGG_UNIQUE’ not handled\nin switch [-Wswitch]\nThanks, I will fix them together with some detailed review suggestion.  (I know the review need lots of time, so appreciated for it).   \nI have not looked at the patch. The numbers above look good. 
The time\n> spent in summing up a column in each row (we are summing only one\n> number per group) is twice the time it took to read those rows from\n> the table. That looks odd. But it may not be something unrelated to\n> your patch. I also observed that for explain analyze select a from\n> grp2 group by a; we just produce a plan containing seq scan node,\n> which is a good thing.\n>\n\nGreat and welcome back Ashutosh:)\n\n\n> --\n> Best Wishes,\n> Ashutosh Bapat\n>", "msg_date": "Fri, 17 Apr 2020 09:16:59 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "On Thu, 16 Apr 2020 at 14:17, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> V6 also includes:\n> 1. Fix the comment misleading you mentioned above.\n> 2. Fixed a concern case for `relation_has_uniquekeys_for` function.\n\nOver on [1], Richard highlights a problem in the current join removals\nlack of ability to remove left joins unless the min_righthand side of\nthe join is a singleton rel. It's my understanding that the reason the\ncode checks for this is down to the fact that join removals used\nunique indexed to prove the uniqueness of the relation and obviously,\nthose can only exist on base relations. I wondered if you might want\nto look into a 0003 patch which removes that restriction? 
I think this\ncan be done now since we no longer look at unique indexes to provide\nthe proof that the join to be removed won't duplicate outer side\nrows.\n\nDavid\n\n[1] https://www.postgresql.org/message-id/CAMbWs4-THacv3DdMpiTrvg5ZY7sNViFF1pTU=kOKmtPBrE9-0Q@mail.gmail.com", "msg_date": "Wed, 29 Apr 2020 12:29:20 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "On Wed, Apr 29, 2020 at 8:29 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Thu, 16 Apr 2020 at 14:17, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> > V6 also includes:\n> > 1. Fix the comment misleading you mentioned above.\n> > 2. Fixed a concern case for `relation_has_uniquekeys_for` function.\n>\n> Over on [1], Richard highlights a problem in the current join removals\n> lack of ability to remove left joins unless the min_righthand side of\n> the join is a singleton rel. It's my understanding that the reason the\n> code checks for this is down to the fact that join removals used\n> unique indexed to prove the uniqueness of the relation and obviously,\n> those can only exist on base relations. 
Fixed a concern case for `relation_has_uniquekeys_for` function.\n\nOver on [1], Richard highlights a problem in the current join removals\nlack of ability to remove left joins unless the min_righthand side of\nthe join is a singleton rel. It's my understanding that the reason the\ncode checks for this is down to the fact that join removals used\nunique indexed to prove the uniqueness of the relation and obviously,\nthose can only exist on base relations.  I wondered if you might want\nto look into a 0003 patch which removes that restriction? I think this\ncan be done now since we no longer look at unique indexes to provide\nthe proves that the join to be removed won't duplicate outer side\nrows.Yes, I think that would be another benefit of UniqueKey,  but it doesn't happenuntil now.  I will take a look of it today and fix it in a separated commit.  Best RegardsAndy Fan", "msg_date": "Wed, 29 Apr 2020 08:34:59 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "On Wed, Apr 29, 2020 at 8:34 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n>\n>\n> On Wed, Apr 29, 2020 at 8:29 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n>> On Thu, 16 Apr 2020 at 14:17, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>> > V6 also includes:\n>> > 1. Fix the comment misleading you mentioned above.\n>> > 2. Fixed a concern case for `relation_has_uniquekeys_for` function.\n>>\n>> Over on [1], Richard highlights a problem in the current join removals\n>> lack of ability to remove left joins unless the min_righthand side of\n>> the join is a singleton rel. It's my understanding that the reason the\n>> code checks for this is down to the fact that join removals used\n>> unique indexed to prove the uniqueness of the relation and obviously,\n>> those can only exist on base relations. 
I wondered if you might want\n>> to look into a 0003 patch which removes that restriction? I think this\n>> can be done now since we no longer look at unique indexes to provide\n>> the proves that the join to be removed won't duplicate outer side\n>> rows.\n>>\n>\n> Yes, I think that would be another benefit of UniqueKey, but it doesn't\n> happen\n> until now. I will take a look of it today and fix it in a separated\n> commit.\n>\n>\nI have make it work locally, the basic idea is to postpone the join\nremoval at\nbuild_join_rel stage where the uniquekey info is well maintained. I will\ntest\nmore to send a product-ready-target patch tomorrow.\n\n# explain (costs off) select a.i from a left join b on a.i = b.i and\n b.j in (select j from c);\n QUERY PLAN\n---------------\n Seq Scan on a\n(1 row)\n\n Best Regard\nAndy Fan\n\nOn Wed, Apr 29, 2020 at 8:34 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:On Wed, Apr 29, 2020 at 8:29 AM David Rowley <dgrowleyml@gmail.com> wrote:On Thu, 16 Apr 2020 at 14:17, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> V6 also includes:\n> 1.  Fix the comment misleading you mentioned above.\n> 2.  Fixed a concern case for `relation_has_uniquekeys_for` function.\n\nOver on [1], Richard highlights a problem in the current join removals\nlack of ability to remove left joins unless the min_righthand side of\nthe join is a singleton rel. It's my understanding that the reason the\ncode checks for this is down to the fact that join removals used\nunique indexed to prove the uniqueness of the relation and obviously,\nthose can only exist on base relations.  I wondered if you might want\nto look into a 0003 patch which removes that restriction? I think this\ncan be done now since we no longer look at unique indexes to provide\nthe proves that the join to be removed won't duplicate outer side\nrows.Yes, I think that would be another benefit of UniqueKey,  but it doesn't happenuntil now.  I will take a look of it today and fix it in a separated commit. 
 I have make it work locally,  the basic idea is to postpone the join removal atbuild_join_rel stage where the uniquekey info is well maintained.   I will test more to send a product-ready-target patch tomorrow. # explain (costs off) select a.i from a left join b on a.i = b.i and    b.j in (select j from c);  QUERY PLAN--------------- Seq Scan on a(1 row) Best RegardAndy Fan", "msg_date": "Wed, 29 Apr 2020 21:27:54 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "I just uploaded the v7 version and split it into smaller commits for easier\nreview/merge. I also maintain a up-to-date README.uniquekey\ndocument since something may changed during discussion or later code.\n\nHere is the simple introduction of each commit.\n\n====\n1. v7-0001-Introduce-RelOptInfo-notnullattrs-attribute.patch\n\nThis commit adds the notnullattrs to RelOptInfo, which grabs the\ninformation\nfrom both catalog and user's query.\n\n\n2. v7-0002-Introuduce-RelOptInfo.uniquekeys-attribute.patch\n\nThis commit just add the uniquekeys to RelOptInfo and maintain it at every\nstage. However the upper level code is not changed due to this.\n\nSome changes of this part in v7:\n1). Removed the UniqueKey.positions attribute. In the past it is used in\n convert_subquery_uniquekeys, however we don't need it actually (And I\n maintained it wrong in the past). Now I build the relationship between\nthe\n outer var to subuqery's TargetList with outrel.subquery.processed_tlist.\n2). onerow UniqueKey(exprs = NIL) need to be converted to normal\nuniquekey(exprs\n != NIL) if it is not one-row any more. This may happen on some outer\njoin.\n\n\n3. 
v7-0003-Refactor-existing-uniqueness-related-code-to-use-.patch\n\nRefactor the existing functions like innerrel_is_unique/rel_is_distinct_for\nto\nuse UniqueKey, and postpone the calls of remove_useless_join and\nreduce_unique_semijoins to use the new implementation.\n\n4. v7-0004-Remove-distinct-node-AggNode-if-the-input-is-uniq.patch\n\nRemove the Distinct node if the result is distinct already. Remove the\nAggNode\nif the group by clause is unique already AND there is no aggregation\nfunction in the\nquery.\n\n5. v7-0005-If-the-group-by-clause-is-unique-and-we-have-aggr.patch\n\nIf the group by clause is unique and the query has an aggregation function, we use\nthe AGG_SORTED strategy but without really sorting, since there is only one row in\neach\ngroup.\n\n\n6. v7-0006-Join-removal-at-run-time-with-UniqueKey.patch\n\nThis commit runs join removal at build_join_rel. At that time, it can make\nfull use of\nUniqueKey. It can handle some more cases; I added some new test cases to\njoin.sql. However, it can't be a full replacement of the current one: there are some\ncases the current strategy can handle well but the new one can't. Like\n\nSELECT a.* FROM a LEFT JOIN (b left join c on b.c_id = c.id) ON (a.b_id =\nb.id);\n\nduring the join of a & b, the join can't be removed since b.id is still useful\nat that\npoint. Later we know that b.id can be removed as well, but\nby then it is too late to remove the previous join.\n\nOn the implementation side, the main idea is: if the join can be removed, we\nwill copy the pathlist from outerrel to joinrel. There are several items\nthat need\nhandling.\n\n1. To make sure the overall join_search_one_level still works, we have to keep the\njoinrel\n even if the innerrel is removed (rather than discard the joinrel).\n2. If the innerrel can be removed, we don't need to build a pathlist for the\njoinrel;\n we just reuse the pathlist from outerrel. However, there are many places\nwhich\n assert rel->pathlist[*]->parent == rel, so since I copied the pathlist, we\n have to change the parent to joinrel.\n3. When creating a plan for some paths on an RTE_RELATION, it needs to know the\n relation Oid via path->parent->relid, so we have to use the\nouterrel->relid\n to overwrite the joinrel->relid, which was 0 before.\n4. For almost the same paths as in item 3, there is usually an assert\nbest_path->parent->rtekind ==\n RTE_RELATION; now the path may appear in joinrel, so I used\n outerrel->rtekind to overwrite joinrel->rtekind.\n5. I guess there are some dependencies between path->pathtarget and\n rel->reltarget. Since we reuse the pathlist of outerrel, I used the\n outer->reltarget as well. If the join can be removed, I guess\n list_length(outrel->reltarget->exprs) >= list_length(joinrel->reltarget->exprs); we\ncan\n
so I copied the pathlist, we\n have to change the parent to joinrel.\n3. During create plan for some path on RTE_RELATION, it needs to know the\n relation Oid with path->parent->relid. so we have to use the\nouterrel->relid\n to overwrite the joinrel->relid which is 0 before.\n4. Almost same paths as item 3, it usually assert\nbest_path->parent->rtekind ==\n RTE_RELATION; now the path may appeared in joinrel, so I used\n outerrel->rtekind to overwrite joinrel->rtekind.\n5. I guess there are some dependencies between path->pathtarget and\n rel->reltarget. since we reuse the pathlist of outerrel, so I used the\n outer->reltarget as well. If the join can be removed, I guess the length\nof\n list_length(outrel->reltarget->exprs) >= (joinrel->reltarget->exprs). we\ncan\n rely on the ProjectionPath to reduce the tlist.\n\nMy patches is based on the current latest commit fb544735f1.\n\nBest Regards\nAndy Fan\n\n>", "msg_date": "Thu, 7 May 2020 09:31:51 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "Hi Andy,\nSorry for delay in review. Your earlier patches are very large and it\nrequires some time to review those. I didn't finish reviewing those\nbut here are whatever comments I have till now on the previous set of\npatches. Please see if any of those are useful to the new set.\n\n\n+/*\n+ * Return true iff there is an equal member in target for every\n+ * member in members\n\nSuggest reword: return true iff every entry in \"members\" list is also present\nin the \"target\" list. This function doesn't care about multi-sets, so please\nmention that in the prologue clearly.\n\n+\n+ if (root->parse->hasTargetSRFs)\n+ return;\n\nWhy? 
A relation's uniqueness may be useful well before we work on SRFs.\n\n+\n+ if (baserel->reloptkind == RELOPT_OTHER_MEMBER_REL)\n+ /*\n+ * Set UniqueKey on member rel is useless, we have to recompute it at\n+ * upper level, see populate_partitionedrel_uniquekeys for reference\n+ */\n+ return;\n\nHandling these here might help in bottom up approach. We annotate each\npartition here and then annotate partitioned table based on the individual\npartitions. Same approach can be used for partitioned join produced by\npartitionwise join.\n\n+ /*\n+ * If the unique index doesn't contain partkey, then it is unique\n+ * on this partition only, so it is useless for us.\n+ */\n\nNot really. It could help partitionwise join.\n\n+\n+ /* Now we have the unique index list which as exactly same on all\nchildrels,\n+ * Set the UniqueIndex just like it is non-partition table\n+ */\n\nI think it's better to annotate each partition with whatever unique index it\nhas whether or not global. That will help partitionwise join, partitionwise\naggregate/group etc.\n\n+ /* A Normal group by without grouping set */\n+ if (parse->groupClause)\n+ add_uniquekey_from_sortgroups(root,\n+ grouprel,\n+ root->parse->groupClause);\n\nThose keys which are part of groupClause and also form unique keys in the input\nrelation, should be recorded as unique keys in group rel. Knowing the minimal\nset of keys allows more optimizations.\n\n+\n+ foreach(lc, unionrel->reltarget->exprs)\n+ {\n+ exprs = lappend(exprs, lfirst(lc));\n+ colnos = lappend_int(colnos, i);\n+ i++;\n+ }\n\nThis should be only possible when it's not UNION ALL. 
We should add some assert\nor protection for that.\n\n+\n+ /* Fast path */\n+ if (innerrel->uniquekeys == NIL || outerrel->uniquekeys == NIL)\n+ return;\n+\n+ outer_is_onerow = relation_is_onerow(outerrel);\n+ inner_is_onerow = relation_is_onerow(innerrel);\n+\n+ outerrel_ukey_ctx = initililze_uniquecontext_for_joinrel(joinrel,\nouterrel);\n+ innerrel_ukey_ctx = initililze_uniquecontext_for_joinrel(joinrel,\ninnerrel);\n+\n+ clause_list = gather_mergeable_joinclauses(joinrel, outerrel, innerrel,\n+ restrictlist, jointype);\n\nSomething similar happens in select_mergejoin_clauses(). At least from the\nfirst reading of this patch, I get an impression that all these functions which\nare going through clause lists and index lists should be merged into other\nfunctions which go through these lists hunting for some information useful to\nthe optimizer.\n\n+\n+\n+ if (innerrel_keeps_unique(root, outerrel, innerrel, clause_list, false))\n+ {\n+ foreach(lc, innerrel_ukey_ctx)\n+ {\n+ UniqueKeyContext ctx = (UniqueKeyContext)lfirst(lc);\n+ if (!list_is_subset(ctx->uniquekey->exprs,\njoinrel->reltarget->exprs))\n+ {\n+ /* The UniqueKey on baserel is not useful on the joinrel */\n\nA joining relation need not be a base rel always, it could be a join rel as\nwell.\n\n+ ctx->useful = false;\n+ continue;\n+ }\n+ if ((jointype == JOIN_LEFT || jointype == JOIN_FULL) &&\n!ctx->uniquekey->multi_nullvals)\n+ {\n+ /* Change the multi_nullvals to true at this case */\n\nNeed a comment explaining this. Generally, I feel, this and other functions in\nthis file need good comments explaining the logic esp. \"why\" instead of \"what\".\n\n+ else if (inner_is_onerow)\n+ {\n+ /* Since rows in innerrel can't be duplicated AND if\ninnerrel is onerow,\n+ * the join result will be onerow also as well. Note:\nonerow implies\n+ * multi_nullvals = false.\n\nI don't understand this comment. 
Why is there only one row in the other\nrelation which can join to this row?\n\n+ }\n+ /*\n+ * Calculate max_colno in subquery. In fact we can check this with\n+ * list_length(sub_final_rel->reltarget->exprs), However, reltarget\n+ * is not set on UPPERREL_FINAL relation, so do it this way\n+ */\n\n\nShould/can we use the same logic to convert an expression in the subquery into\na Var of the outer query as in convert_subquery_pathkeys(). If the subquery\ndoesn't have a reltarget set, we should be able to use reltarget of any of its\npaths since all of those should match the positions across subquery and the\nouter query.\n\nWill continue reviewing your new set of patches as time permits.\n\nOn Thu, May 7, 2020 at 7:02 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n> I just uploaded the v7 version and split it into smaller commits for easier\n> review/merge. I also maintain a up-to-date README.uniquekey\n> document since something may changed during discussion or later code.\n>\n> Here is the simple introduction of each commit.\n>\n> ====\n> 1. v7-0001-Introduce-RelOptInfo-notnullattrs-attribute.patch\n>\n> This commit adds the notnullattrs to RelOptInfo, which grabs the information\n> from both catalog and user's query.\n>\n>\n> 2. v7-0002-Introuduce-RelOptInfo.uniquekeys-attribute.patch\n>\n> This commit just add the uniquekeys to RelOptInfo and maintain it at every\n> stage. However the upper level code is not changed due to this.\n>\n> Some changes of this part in v7:\n> 1). Removed the UniqueKey.positions attribute. In the past it is used in\n> convert_subquery_uniquekeys, however we don't need it actually (And I\n> maintained it wrong in the past). Now I build the relationship between the\n> outer var to subuqery's TargetList with outrel.subquery.processed_tlist.\n> 2). onerow UniqueKey(exprs = NIL) need to be converted to normal uniquekey(exprs\n> != NIL) if it is not one-row any more. This may happen on some outer join.\n>\n>\n> 3. 
v7-0003-Refactor-existing-uniqueness-related-code-to-use-.patch\n>\n> Refactor the existing functions like innerrel_is_unique/res_is_distinct_for to\n> use UniqueKey, and postpone the call of remove_useless_join and\n> reduce_unique_semijoins to use the new implementation.\n>\n> 4. v7-0004-Remove-distinct-node-AggNode-if-the-input-is-uniq.patch\n>\n> Remove the distinct node if the result is distinct already. Remove the aggnode\n> if the group by clause is unique already AND there is no aggregation function in\n> query.\n>\n> 5. v7-0005-If-the-group-by-clause-is-unique-and-we-have-aggr.patch\n>\n> If the group by clause is unique and query has aggregation function, we use\n> the AGG_SORT strategy but without really sort since it has only one row in each\n> group.\n>\n>\n> 6. v7-0006-Join-removal-at-run-time-with-UniqueKey.patch\n>\n> This commit run join removal at build_join_rel. At that time, it can fully uses\n> unique key. It can handle some more cases, I added some new test cases to\n> join.sql. However it can be a replacement of the current one. There are some\n> cases the new strategy can work run well but the current one can. Like\n>\n> SELECT a.* FROM a LEFT JOIN (b left join c on b.c_id = c.id) ON (a.b_id = b.id);\n>\n> during the join a & b, the join can't be removed since b.id is still useful in\n> future. However in the future, we know the b.id can be removed as well, but\n> it is too late to remove the previous join.\n>\n> At the implementation part, the main idea is if the join_canbe_removed. we\n> will copy the pathlist from outerrel to joinrel. There are several items need to\n> handle.\n>\n> 1. To make sure the overall join_search_one_level, we have to keep the joinrel\n> even the innerrel is removed (rather than discard the joinrel).\n> 2. If the innerrel can be removed, we don't need to build pathlist for joinrel,\n> we just reuse the pathlist from outerrel. However there are many places where\n> use assert rel->pathlist[*]->parent == rel. 
so I copied the pathlist, we\n> have to change the parent to joinrel.\n> 3. During create plan for some path on RTE_RELATION, it needs to know the\n> relation Oid with path->parent->relid. so we have to use the outerrel->relid\n> to overwrite the joinrel->relid which is 0 before.\n> 4. Almost same paths as item 3, it usually assert best_path->parent->rtekind ==\n> RTE_RELATION; now the path may appeared in joinrel, so I used\n> outerrel->rtekind to overwrite joinrel->rtekind.\n> 5. I guess there are some dependencies between path->pathtarget and\n> rel->reltarget. since we reuse the pathlist of outerrel, so I used the\n> outer->reltarget as well. If the join can be removed, I guess the length of\n> list_length(outrel->reltarget->exprs) >= (joinrel->reltarget->exprs). we can\n> rely on the ProjectionPath to reduce the tlist.\n>\n> My patches is based on the current latest commit fb544735f1.\n>\n> Best Regards\n> Andy Fan\n\n\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Thu, 7 May 2020 16:56:27 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "Hi Ashutosh:\n\nAppreciate for your comments!\n\nOn Thu, May 7, 2020 at 7:26 PM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\nwrote:\n\n> Hi Andy,\n> Sorry for delay in review.\n\n\nI understand no one has obligation to do that, and it must take\nreviewer's time\nand more, so really appreciated for it! Hope I can provide more kindly\nhelp like\nthis in future as well.\n\n\n> Your earlier patches are very large and it\n> requires some time to review those. I didn't finish reviewing those\n> but here are whatever comments I have till now on the previous set of\n> patches. 
Please see if any of those are useful to the new set.\n>\n\nYes, I just realized the size as well, so I split them into smaller commits,\nand\neach commit can be built and run make check successfully.\n\nAll of your comments are still valid except the last one\n(convert_subquery_uniquekeys),\nwhich has been fixed in the v7 version.\n\n\n>\n> +/*\n> + * Return true iff there is an equal member in target for every\n> + * member in members\n>\n> Suggest reword: return true iff every entry in \"members\" list is also\n> present\n> in the \"target\" list.\n\n\nWill do, thanks!\n\n\n> This function doesn't care about multi-sets, so please\n> mention that in the prologue clearly.\n>\n> +\n> + if (root->parse->hasTargetSRFs)\n> + return;\n>\n> Why? 
All the 3 items above is partitionwise join related, I\nneed some time\nto check how to handle that.\n\n\n>\n> + /* A Normal group by without grouping set */\n> + if (parse->groupClause)\n> + add_uniquekey_from_sortgroups(root,\n> + grouprel,\n> + root->parse->groupClause);\n>\n> Those keys which are part of groupClause and also form unique keys in the\n> input\n> relation, should be recorded as unique keys in group rel. Knowing the\n> minimal\n> set of keys allows more optimizations.\n>\n\nThis is a very valid point now. I ignored this because I wanted to remove\nthe AggNode\ntotally if the part of groupClause is unique, However it doesn't happen\nlater if there is\naggregation call in this query.\n\n\n> +\n> + foreach(lc, unionrel->reltarget->exprs)\n> + {\n> + exprs = lappend(exprs, lfirst(lc));\n> + colnos = lappend_int(colnos, i);\n> + i++;\n> + }\n>\n> This should be only possible when it's not UNION ALL. We should add some\n> assert\n> or protection for that.\n>\n\nOK, actually I called this function in generate_union_paths. which handle\nUNION case only. I will add the Assert anyway.\n\n\n>\n> +\n> + /* Fast path */\n> + if (innerrel->uniquekeys == NIL || outerrel->uniquekeys == NIL)\n> + return;\n> +\n> + outer_is_onerow = relation_is_onerow(outerrel);\n> + inner_is_onerow = relation_is_onerow(innerrel);\n> +\n> + outerrel_ukey_ctx = initililze_uniquecontext_for_joinrel(joinrel,\n> outerrel);\n> + innerrel_ukey_ctx = initililze_uniquecontext_for_joinrel(joinrel,\n> innerrel);\n> +\n> + clause_list = gather_mergeable_joinclauses(joinrel, outerrel,\n> innerrel,\n> + restrictlist, jointype);\n>\n> Something similar happens in select_mergejoin_clauses().\n\n\nI didn't realized this before. 
I will refactor this code accordingly if\nnecessary\nafter reading that.\n\n\n> At least from the\n> first reading of this patch, I get an impression that all these functions\n> which\n> are going through clause lists and index lists should be merged into other\n> functions which go through these lists hunting for some information useful\n> to\n> the optimizer.\n>\n+\n> +\n> + if (innerrel_keeps_unique(root, outerrel, innerrel, clause_list,\n> false))\n> + {\n> + foreach(lc, innerrel_ukey_ctx)\n> + {\n> + UniqueKeyContext ctx = (UniqueKeyContext)lfirst(lc);\n> + if (!list_is_subset(ctx->uniquekey->exprs,\n> joinrel->reltarget->exprs))\n> + {\n> + /* The UniqueKey on baserel is not useful on the joinrel\n> */\n>\n> A joining relation need not be a base rel always, it could be a join rel as\n> well.\n>\n\ngood catch.\n\n\n>\n> + ctx->useful = false;\n> + continue;\n> + }\n> + if ((jointype == JOIN_LEFT || jointype == JOIN_FULL) &&\n> !ctx->uniquekey->multi_nullvals)\n> + {\n> + /* Change the multi_nullvals to true at this case */\n>\n> Need a comment explaining this. Generally, I feel, this and other\n> functions in\n> this file need good comments explaining the logic esp. \"why\" instead of\n> \"what\".\n\n\nExactly.\n\n>\n>\n+ else if (inner_is_onerow)\n> + {\n> + /* Since rows in innerrel can't be duplicated AND if\n> innerrel is onerow,\n> + * the join result will be onerow also as well. Note:\n> onerow implies\n> + * multi_nullvals = false.\n>\n> I don't understand this comment. Why is there only one row in the other\n> relation which can join to this row?\n>\n\nI guess you may miss the onerow special case if I understand correctly.\ninner_is_onerow means something like \"SELECT xx FROM t1 where uk = 1\".\ninnerrel can't be duplicated means: t1.y = t2.pk; so the finally result\nis onerow\nas well. One of the overall query is SELECT .. 
FROM t1, t2 where t2.y =\nt2.pk;\n\n\nI explained more about onerow in the v7 README.unqiuekey document, just\ncopy\nit here.\n\n===\n1. What is UniqueKey?\n....\nonerow is also a kind of UniqueKey which means the RelOptInfo will have 1\nrow at\nmost. it has a stronger semantic than others. like SELECT uk FROM t; uk is\nnormal unique key and may have different values.\nSELECT colx FROM t WHERE uk = const. colx is unique AND we have only 1\nvalue. This\nfield can used for innerrel_is_unique. and also be used as an optimization\nfor\nthis case. We don't need to maintain multi UniqueKey, we just maintain one\nwith\nonerow = true and exprs = NIL.\n\nonerow is set to true only for 2 cases right now. 1) SELECT .. FROM t WHERE\nuk =\n1; 2). SELECT aggref(xx) from t; // Without group by.\n===\n\n===\n2. How is it maintained?\n....\nMore considerations about onerow:\n1. If relation with one row and it can't be duplicated, it is still possible\n contains mulit_nullvas after outer join.\n2. If the either UniqueKey can be duplicated after join, the can get one row\n only when both side is one row AND there is no outer join.\n3. Whenever the onerow UniqueKey is not a valid any more, we need to\nconvert one\n row UniqueKey to normal unique key since we don't store exprs for one-row\n relation. get_exprs_from_uniquekeys will be used here.\n===\n\nand 3. in the v7 implementation, the code struct is more clearer:)\n\n\n\n>\n> + }\n> + /*\n> + * Calculate max_colno in subquery. In fact we can check this with\n> + * list_length(sub_final_rel->reltarget->exprs), However, reltarget\n> + * is not set on UPPERREL_FINAL relation, so do it this way\n> + */\n>\n>\n> Should/can we use the same logic to convert an expression in the subquery\n> into\n> a Var of the outer query as in convert_subquery_pathkeys().\n\n\nYes, my previous implementation is actually wrong. 
and should be fixed it\nin v7.\n\n\n> If the subquery doesn't have a reltarget set, we should be able to use\n> reltarget\n\nof any of its paths since all of those should match the positions across\n> subquery\n\nand the outer query.\n>\n\nbut I think it should be rel->subroot->processed_tlist rather than\nreltarget? Actually I still\na bit of uneasy about rel->subroot->processed_tlist for some DML case,\nwhich the\nprocessed_tlist is different and I still not figure out its impact.\n\n\n> Will continue reviewing your new set of patches as time permits.\n>\n\nThank you! Actually there is no big difference between v6 and v7 regarding\nthe\n UniqueKey part except 2 bug fix. However v7 has some more documents,\ncomments improvement and code refactor/split, which may be helpful\nfor review. You may try v7 next time if v8 has not come yet:)\n\nBest Regards\nAndy Fan\n\n", "msg_date": "Fri, 8 May 2020 09:57:24 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" },
{ "msg_contents": "The attached is the v8-patches. The main improvements are based on\nAshutosh's\nreview (reduce the SRF impact and partition level UniqueKey). I also update\nthe\nREADME.uniquekey based on the discussion. So anyone who don't want to go\nthrough the long email can read the README.uniquekey first.\n\n===\nJust copy some content from the README for easy discussion.\n\nAs for inherit table, we maintain the UnqiueKey on childrel as usual. But\nfor\npartitioned table we need to maintain 2 different kinds of UnqiueKey.\n1). UniqueKey on the parent relation 2). UniqueKey on child relation for\npartition wise query.\n\nExample:\nCREATE TABLE p (a int not null, b int not null) partition by list (a);\nCREATE TABLE p0 partition of p for values in (1);\nCREATE TABLE p1 partition of p for values in (2);\n\ncreate unique index p0_b on p0(b);\ncreate unique index p1_b on p1(b);\n\nNow b is unique on partition level only, so the distinct can't be removed on\nthe following cases. SELECT DISTINCT b FROM p; However for query\nSELECT DISTINCT b FROM p WHERE a = 1; where only one\npartition is chosen, the UniqueKey on child relation is same as the\nUniqueKey\non parent relation. The distinct can be removed.\n\nAnother usage of UniqueKey on partition level is it be helpful for\npartition-wise join.\n\nAs for the UniqueKey on parent table level, it comes with 2 different ways.\n\n1). 
the UniqueKey is also derived in Unique index, but the index must be\nsame\nin all the related children relations and the unique index must contains\nPartition Key in it. Example:\n\nCREATE UNIQUE INDEX p_ab ON p(a, b); -- where a is the partition key.\n\n-- Query\nSELECT a, b FROM p; -- the (a, b) is a UniqueKey of p.\n\n 2). If the parent relation has only one childrel, the UniqueKey on\nchildrel is\n the UniqueKey on parent as well.\n\nThe patch structure is not changed, you can see [1] for reference. The\npatches is\nbased on latest commit ac3a4866c0.\n\n[1]\nhttps://www.postgresql.org/message-id/CAKU4AWr1BmbQB4F7j22G%2BNS4dNuem6dKaUf%2B1BK8me61uBgqqg%40mail.gmail.com\n\n\nBest Regards\nAndy Fan\n\n>", "msg_date": "Wed, 13 May 2020 19:59:08 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "On Fri, May 8, 2020 at 7:27 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n>> + else if (inner_is_onerow)\n>> + {\n>> + /* Since rows in innerrel can't be duplicated AND if\n>> innerrel is onerow,\n>> + * the join result will be onerow also as well. Note:\n>> onerow implies\n>> + * multi_nullvals = false.\n>>\n>> I don't understand this comment. Why is there only one row in the other\n>> relation which can join to this row?\n>\n>\n> I guess you may miss the onerow special case if I understand correctly.\n> inner_is_onerow means something like \"SELECT xx FROM t1 where uk = 1\".\n> innerrel can't be duplicated means: t1.y = t2.pk; so the finally result is onerow\n> as well. One of the overall query is SELECT .. FROM t1, t2 where t2.y = t2.pk;\n>\n>\n> I explained more about onerow in the v7 README.unqiuekey document, just copy\n> it here.\nFor some reason this mail remained in my drafts without being sent.\nSending it now. Sorry.\n\nMy impression about the one row stuff, is that there is too much\nspecial casing around it. 
We should somehow structure the UniqueKey\ndata so that one row unique keys come naturally rather than special\ncased. E.g every column in such a case is unique in the result so\ncreate as many UniqueKeys are the number of columns or create one\nunique key with no column as you have done but handle it more\ngracefully rather than spreading it all over the place.\n\nAlso, the amount of code that these patches changes seems to be much\nlarger than the feature's worth arguably. But it indicates that we are\nmodifying/adding more code than necessary. Some of that code can be\nmerged into existing code which does similar things as I have pointed\nout in my previous comment.\n\nThanks for working on the expanded scope of the initial feature you\nproposed. But it makes the feature more useful, I think.\n\n--\nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Wed, 13 May 2020 17:34:17 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "On Wed, May 13, 2020 at 8:04 PM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>\nwrote:\n\n> On Fri, May 8, 2020 at 7:27 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n> >> + else if (inner_is_onerow)\n> >> + {\n> >> + /* Since rows in innerrel can't be duplicated AND if\n> >> innerrel is onerow,\n> >> + * the join result will be onerow also as well. Note:\n> >> onerow implies\n> >> + * multi_nullvals = false.\n> >>\n> >> I don't understand this comment. Why is there only one row in the other\n> >> relation which can join to this row?\n> >\n> >\n> > I guess you may miss the onerow special case if I understand correctly.\n> > inner_is_onerow means something like \"SELECT xx FROM t1 where uk = 1\".\n> > innerrel can't be duplicated means: t1.y = t2.pk; so the finally\n> result is onerow\n> > as well. One of the overall query is SELECT .. 
FROM t1, t2 where t2.y\n> = t2.pk;\n> >\n> >\n> > I explained more about onerow in the v7 README.unqiuekey document, just\n> copy\n> > it here.\n> For some reason this mail remained in my drafts without being sent.\n> Sending it now. Sorry.\n>\n> My impression about the one row stuff, is that there is too much\n> special casing around it. We should somehow structure the UniqueKey\n> data so that one row unique keys come naturally rather than special\n> cased. E.g every column in such a case is unique in the result so\n> create as many UniqueKeys are the number of columns\n\n\nThis is the beginning state of the UniqueKey, later David suggested\nthis as an optimization[1], I buy-in the idea and later I found it mean\nmore than the original one [2], so I think onerow is needed actually.\n\n\n> or create one\n> unique key with no column as you have done but handle it more\n> gracefully rather than spreading it all over the place.\n\n\nI think this is what I do now, but it is possible that I spread it more than\nnecessary, if so, please let me know. I maintained the README.uniquekey\ncarefully since v7 and improved a lot in v8, it may be a good place to check\nit.\n\n\n>\nAlso, the amount of code that these patches changes seems to be much\n> larger than the feature's worth arguably. But it indicates that we are\n> modifying/adding more code than necessary. Some of that code can be\n> merged into existing code which does similar things as I have pointed\n> out in my previous comment.\n>\n>\nI have reused the code select_mergejoin_clause rather than maintaining my\nown copies in v8. Thanks a lot about that suggestion. This happened mainly\nbecause I didn't read enough code. I will go through more to see if I have\nsimilar\nissues.\n\n\n> Thanks for working on the expanded scope of the initial feature you\n> proposed. 
But it makes the feature more useful, I think.\n>\n> That's mainly because your suggestions are always insightful which makes me\nwilling to continue to work on it, so thank you all!\n\n===\nIn fact, I was hesitated that how to reply an email when I send an new\nversion\nof the patch. One idea is I should explain clear what is the difference\nbetween Vn\nand Vn-1. The other one is not many people read the Vn-1, so I would like\nto keep\nthe email simplified and keep the README clearly to save the reviewer's\ntime.\nActually there are more changes in v8 than I stated above. for example:\nfor the\nUniqueKey on baserelation, we will reduce the expr from the UniqueKey if\nthe\nexpr is a const. Unique on (A, B). query is SELECT b FROM t WHERE a =\n1;\nin v7, the UniqueKey is (a, b). In v8, the UniqueKey is (b) only. But\nsince most\npeople still not start to read it, so I add such information to README\nrather than\necho the same in email thread. I will try more to understand how to\ncommunicate more\nsmooth. But any suggestion on this part is appreciated.\n\nBest Regards\nAndy Fan\n\n", "msg_date": "Wed, 13 May 2020 23:48:25 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" },
{ "msg_contents": "Hi Ashutosh:\n\nAll your suggestions are followed except the UNION ALL one. I replied the\nreason\nbelow. For the suggestions about partitioned table, looks lot of cases to\nhandle, so\nI summarize/enrich your idea in README and email thread, we can continue\nto talk about that.\n\n\n\n>> +\n>> + foreach(lc, unionrel->reltarget->exprs)\n>> + {\n>> + exprs = lappend(exprs, lfirst(lc));\n>> + colnos = lappend_int(colnos, i);\n>> + i++;\n>> + }\n>>\n>> This should be only possible when it's not UNION ALL. We should add some\n>> assert\n>> or protection for that.\n>>\n>\n> OK, actually I called this function in generate_union_paths. which handle\n> UNION case only. 
I will add the Assert anyway.\n>\n>\n\nFinally I found I have to add one more parameter to\npopulate_unionrel_uniquekeys, and\nthe only usage of that parameter is used to Assert, so I didn't do that at\nlast.\n\n\n>\n>> +\n>> + /* Fast path */\n>> + if (innerrel->uniquekeys == NIL || outerrel->uniquekeys == NIL)\n>> + return;\n>> +\n>> + outer_is_onerow = relation_is_onerow(outerrel);\n>> + inner_is_onerow = relation_is_onerow(innerrel);\n>> +\n>> + outerrel_ukey_ctx = initililze_uniquecontext_for_joinrel(joinrel,\n>> outerrel);\n>> + innerrel_ukey_ctx = initililze_uniquecontext_for_joinrel(joinrel,\n>> innerrel);\n>> +\n>> + clause_list = gather_mergeable_joinclauses(joinrel, outerrel,\n>> innerrel,\n>> + restrictlist, jointype);\n>>\n>> Something similar happens in select_mergejoin_clauses().\n>\n>\n> I didn't realized this before. I will refactor this code accordingly if\n> necessary\n> after reading that.\n>\n>\n\n I reused select_mergejoin_clauses and removed the duplicated code in\nuniquekeys.c\nin v8.\n\nAt least from the\n>> first reading of this patch, I get an impression that all these functions\n>> which\n>> are going through clause lists and index lists should be merged into other\n>> functions which go through these lists hunting for some information\n>> useful to\n>> the optimizer.\n>>\n> +\n>> +\n>> + if (innerrel_keeps_unique(root, outerrel, innerrel, clause_list,\n>> false))\n>> + {\n>> + foreach(lc, innerrel_ukey_ctx)\n>> + {\n>> + UniqueKeyContext ctx = (UniqueKeyContext)lfirst(lc);\n>> + if (!list_is_subset(ctx->uniquekey->exprs,\n>> joinrel->reltarget->exprs))\n>> + {\n>> + /* The UniqueKey on baserel is not useful on the joinrel\n>> */\n>>\n>> A joining relation need not be a base rel always, it could be a join rel\n>> as\n>> well.\n>>\n>\n> good catch.\n>\n\nFixed.\n\n\n>\n>\n>>\n>> + ctx->useful = false;\n>> + continue;\n>> + }\n>> + if ((jointype == JOIN_LEFT || jointype == JOIN_FULL) &&\n>> !ctx->uniquekey->multi_nullvals)\n>> + {\n>> + /* 
Change the multi_nullvals to true at this case */\n>>\n>> Need a comment explaining this. Generally, I feel, this and other\n>> functions in\n>> this file need good comments explaining the logic esp. \"why\" instead of\n>> \"what\".\n>\n>\n> Exactly.\n>\n\nDone in v8.\n\n\n>> Will continue reviewing your new set of patches as time permits.\n>>\n>\n> Thank you! Actually there is no big difference between v6 and v7\n> regarding the\n> UniqueKey part except 2 bug fix. However v7 has some more documents,\n> comments improvement and code refactor/split, which may be helpful\n> for review. You may try v7 next time if v8 has not come yet:)\n>\n>\nv8 has come :)\n\nBest Regards\nAndy Fan\n\n", "msg_date": "Thu, 14 May 2020 00:02:42 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" },
{ "msg_contents": "On Thu, 14 May 2020 at 03:48, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> On Wed, May 13, 2020 at 8:04 PM Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> wrote:\n>> My impression about the one row stuff, is that there is too much\n>> special casing around it. We should somehow structure the UniqueKey\n>> data so that one row unique keys come naturally rather than special\n>> cased. E.g every column in such a case is unique in the result so\n>> create as many UniqueKeys are the number of columns\n>\n>\n> This is the beginning state of the UniqueKey, later David suggested\n> this as an optimization[1], I buy-in the idea and later I found it mean\n> more than the original one [2], so I think onerow is needed actually.\n\nHaving the \"onerow\" flag was not how I intended it to work.\n\nHere's an example of how I thought it should work:\n\nAssume t1 has UniqueKeys on {a}\n\nSELECT DISTINCT a,b FROM t1;\n\nHere the DISTINCT can be a no-op due to \"a\" being unique within t1. Or\nmore basically, {a} is a subset of {a,b}.\n\nThe code which does this is relation_has_uniquekeys_for(), which\ncontains the code:\n\n+ if (list_is_subset(ukey->exprs, exprs))\n+ return true;\n\nIn this case, ukey->exprs is {a} and exprs is {a,b}. 
So, if the\nUniqueKey's exprs are a subset of, in this case, the DISTINCT exprs\nthen relation_has_uniquekeys_for() returns true. Basically\nlist_is_subset({a}, {a,b}), Answer: \"Yes\".\n\nFor the onerow stuff, if we can prove the relation returns only a\nsingle row, e.g. an aggregate without a GROUP BY, or there are\nEquivalenceClasses with ec_has_const == true for each key of a unique\nindex, then why can't we just set the UniqueKeys to {}? That would\nmean the code to determine if we can avoid performing an explicit\nDISTINCT operation would be called with list_is_subset({}, {a,b}),\nwhich is also true, in fact, an empty set is a subset of any set. Why\nis there a need to special case that fact?\n\nIn light of those thoughts, can you explain why you think we need to\nkeep the onerow flag?\n\nDavid\n\n\n", "msg_date": "Thu, 14 May 2020 10:19:59 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "On Thu, May 14, 2020 at 6:20 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Thu, 14 May 2020 at 03:48, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> > On Wed, May 13, 2020 at 8:04 PM Ashutosh Bapat <\n> ashutosh.bapat.oss@gmail.com> wrote:\n> >> My impression about the one row stuff, is that there is too much\n> >> special casing around it. We should somehow structure the UniqueKey\n> >> data so that one row unique keys come naturally rather than special\n> >> cased. E.g every column in such a case is unique in the result so\n> >> create as many UniqueKeys are the number of columns\n> >\n> >\n> > This is the beginning state of the UniqueKey,  later David suggested\n> > this as an optimization[1], I buy-in the idea and later I found it mean\n> > more than the original one [2], so I think onerow is needed actually.\n\nHaving the \"onerow\" flag was not how I intended it to work.\n\nThanks for the detailed explanation. 
So I think we do need to handle\nonerow\nspecially, (It means more things than adding each column as a UniqueKey).\nbut we don't need the onerow flag since we can tell it by ukey->exprs ==\nNIL.\n\nDuring the development of this feature, I added some Asserts as double\nchecking\nfor onerow and exprs. it helps me to find some special cases. like\nSELECT FROM multirows union SELECT FROM multirows; where targetlist is\nNIL.\n(I find the above case returns onerow as well just now). so the onerow flag\nallows us to\ncheck these special things with more attention. Even this is not the\noriginal intention\nbut looks it is the one purpose now.\n\nHowever I am feeling that removing the onerow flag doesn't require much of code\nchanges. Almost all the special cases which are needed before are still\nneeded\nafter that and all the functions based on that like relation_is_onerow\n/add_uniquekey_onerow are still valid, we just need to change the\nimplementation.\nso do you think we need to remove the onerow flag totally?\n\nBest Regards\nAndy Fan\n", "msg_date": "Thu, 14 May 2020 10:38:44 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "On Wed, May 13, 2020 at 11:48 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n>\n>> My impression about the one row stuff, is that there is too much\n>> special casing around it. We should somehow structure the UniqueKey\n>> data so that one row unique keys come naturally rather than special\n>> cased. E.g every column in such a case is unique in the result so\n>> create as many UniqueKeys are the number of columns\n>\n>\n> This is the beginning state of the UniqueKey,  later David suggested\n> this as an optimization[1], I buy-in the idea and later I found it mean\n> more than the original one [2], so I think onerow is needed actually.\n>\n>\nI just found I forgot the links yesterday. 
Here is it.\n\n[1]\nhttps://www.postgresql.org/message-id/CAApHDvqg%2BmQyxJnCizE%3DqJcBL90L%3DoFXTFyiwWWEaUnzG7Uc5Q%40mail.gmail.com\n\n[2]\nhttps://www.postgresql.org/message-id/CAKU4AWrGrs0Vk5OrZmS1gbTA2ijDH18NHKnXZTPZNuupn%2B%2Bing%40mail.gmail.com\n", "msg_date": "Thu, 14 May 2020 10:46:27 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "> On Tue, Apr 14, 2020 at 09:09:31PM +1200, David Rowley wrote:\n>\n> The infrastructure (knowing the unique properties of a RelOptInfo), as\n> provided by the patch Andy has been working on, which is based on my\n> rough prototype version, I believe should be used for the skip scans\n> patch as well.\n\nHi,\n\nFollowing our agreement about making skip scan patch to use UniqueKeys\nimplementation from this thread I've rebased index skip scan on first\ntwo patches from v8 series [1] (if I understand correctly those two are\nintroducing the concept, and others are just using it). 
I would like to\nclarify couple of things:\n\n* It seems populate_baserel_uniquekeys, which actually sets uniquekeys,\n is called after create_index_paths, where index skip scan already\n needs to use them. Is it possible to call it earlier?\n\n* Do I understand correctly, that a UniqueKey would be created in a\n simplest case only when an index is unique? This makes it different\n from what was implemented for index skip scan, since there UniqueKeys\n also represents potential to use non-unique index to facilitate search\n for unique values via skipping.\n\n[1]: https://www.postgresql.org/message-id/CAKU4AWpOM3_J-B%3DwQtCeU1TGr89MhpJBBkv2he1tAeQz6i4XNw%40mail.gmail.com\n\n\n", "msg_date": "Thu, 21 May 2020 21:51:23 +0200", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "On Fri, 22 May 2020 at 07:49, Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n> * It seems populate_baserel_uniquekeys, which actually sets uniquekeys,\n> is called after create_index_paths, where index skip scan already\n> needs to use them. Is it possible to call it earlier?\n\nSeems reasonable. I originally put it after check_index_predicates().\nWe need to wait until at least then so we can properly build\nUniqueKeys for partial indexes.\n\n> * Do I understand correctly, that a UniqueKey would be created in a\n> simplest case only when an index is unique? This makes it different\n> from what was implemented for index skip scan, since there UniqueKeys\n> also represents potential to use non-unique index to facilitate search\n> for unique values via skipping.\n\nThe way I picture the two working together is that the core UniqueKey\npatch adds UniqueKeys to RelOptInfos to allow us to have knowledge\nabout what they're unique on based on the base relation's unique\nindexes.\n\nFor Skipscans, that patch must expand on UniqueKeys to teach Paths\nabout them. 
I imagine we'll set some required UniqueKeys during\nstandard_qp_callback() and then we'll try to create some Skip Scan\npaths (which are marked with UniqueKeys) if the base relation does not\nalready have UniqueKeys that satisfy the required UniqueKeys that were\nset during standard_qp_callback(). In the future, there may be other\nreasons to create Skip Scan paths for certain rels, e.g if they're on\nthe inner side of a SEMI/ANTI join, it might be useful to try those\nwhen planning joins.\n\nDavid\n\n\n", "msg_date": "Fri, 22 May 2020 08:40:17 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "On Thu, 14 May 2020 at 14:39, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n> On Thu, May 14, 2020 at 6:20 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>> Having the \"onerow\" flag was not how I intended it to work.\n>>\n> Thanks for the detailed explanation. So I think we do need to handle onerow\n> specially, (It means more things than adding each column as a UniqueKey).\n> but we don't need the onerow flag since we can tell it by ukey->exprs == NIL.\n>\n> During the developer of this feature, I added some Asserts as double checking\n> for onerow and exprs. it helps me to find some special cases. like\n> SELECT FROM multirows union SELECT FROM multirows; where targetlist is NIL.\n> (I find the above case returns onerow as well just now). so onerow flag allows us\n> check this special things with more attention. Even this is not the original intention\n> but looks it is the one purpose now.\n\nBut surely that special case should just go in\npopulate_unionrel_uniquekeys(). If the targetlist is empty, then add a\nUniqueKey with an empty set of exprs.\n\n> However I am feeling that removing onerow flag doesn't require much of code\n> changes. 
Almost all the special cases which are needed before are still needed\n> after that and all the functions based on that like relation_is_onerow\n> /add_uniquekey_onerow is still valid, we just need change the implementation.\n> so do you think we need to remove onerow flag totally?\n\nWell, at the moment I'm not quite understanding why it's needed. If\nit's not needed then we should remove it. If it turns out there is\nsome valid reason for it, then we should keep it.\n\nDavid\n\n\n", "msg_date": "Fri, 22 May 2020 08:51:45 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "On Fri, May 22, 2020 at 4:40 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Fri, 22 May 2020 at 07:49, Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n> > * It seems populate_baserel_uniquekeys, which actually sets uniquekeys,\n> > is called after create_index_paths, where index skip scan already\n> > needs to use them. Is it possible to call it earlier?\n>\n> Seems reasonable. I originally put it after check_index_predicates().\n> We need to wait until at least then so we can properly build\n> UniqueKeys for partial indexes.\n>\n>\nLooks a very valid reason, I will add this in the next version.\n\n\n> > * Do I understand correctly, that a UniqueKey would be created in a\n> > simplest case only when an index is unique? This makes it different\n> > from what was implemented for index skip scan, since there UniqueKeys\n> > also represents potential to use non-unique index to facilitate search\n> > for unique values via skipping.\n>\n> The way I picture the two working together is that the core UniqueKey\n> patch adds UniqueKeys to RelOptInfos to allow us to have knowledge\n> about what they're unique on based on the base relation's unique\n> indexes.\n>\nFor Skipscans, that patch must expand on UniqueKeys to teach Paths\n> about them. 
I imagine we'll set some required UniqueKeys during\n> standard_qp_callback() and then we'll try to create some Skip Scan\n> paths (which are marked with UniqueKeys) if the base relation does not\n> already have UniqueKeys that satisfy the required UniqueKeys that were\n> set during standard_qp_callback(). In the future, there may be other\n> reasons to create Skip Scan paths for certain rels, e.g if they're on\n> the inner side of a SEMI/ANTI join, it might be useful to try those\n> when planning joins.\n>\n>\nYes, In current implementation, we also add UniqueKey during\ncreate_xxx_paths,\nxxx may be grouping/union. after the index skipscan patch, we can do the\nsimilar\nthings in create_indexskip_path.\n\n-- \nBest Regards\nAndy Fan\n\nOn Fri, May 22, 2020 at 4:40 AM David Rowley <dgrowleyml@gmail.com> wrote:On Fri, 22 May 2020 at 07:49, Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n> * It seems populate_baserel_uniquekeys, which actually sets uniquekeys,\n>   is called after create_index_paths, where index skip scan already\n>   needs to use them. Is it possible to call it earlier?\n\nSeems reasonable. I originally put it after check_index_predicates().\nWe need to wait until at least then so we can properly build\nUniqueKeys for partial indexes.\nLooks a very valid reason,  I will add this in the next version.  \n> * Do I understand correctly, that a UniqueKey would be created in a\n>   simplest case only when an index is unique? This makes it different\n>   from what was implemented for index skip scan, since there UniqueKeys\n>   also represents potential to use non-unique index to facilitate search\n>   for unique values via skipping.\n\nThe way I picture the two working together is that the core UniqueKey\npatch adds UniqueKeys to RelOptInfos to allow us to have knowledge\nabout what they're unique on based on the base relation's unique\nindexes. \nFor Skipscans, that patch must expand on UniqueKeys to teach Paths\nabout them. 
I imagine we'll set some required UniqueKeys during\nstandard_qp_callback() and then we'll try to create some Skip Scan\npaths (which are marked with UniqueKeys) if the base relation does not\nalready have UniqueKeys that satisfy the required UniqueKeys that were\nset during standard_qp_callback().  In the future, there may be other\nreasons to create Skip Scan paths for certain rels, e.g if they're on\nthe inner side of a SEMI/ANTI join, it might be useful to try those\nwhen planning joins.Yes,  In current implementation, we also add UniqueKey during create_xxx_paths,xxx may be grouping/union.  after the index skipscan patch, we can do the similarthings in create_indexskip_path. -- Best RegardsAndy Fan", "msg_date": "Fri, 22 May 2020 07:39:38 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "> if I understand correctly those two are introducing the concept, and\n\nothers are just using it\n\n\nYou understand it correctly.\n\n-- \nBest Regards\nAndy Fan\n", "msg_date": "Fri, 22 May 2020 07:41:11 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "On Fri, May 22, 2020 at 4:52 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Thu, 14 May 2020 at 14:39, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> >\n> > On Thu, May 14, 2020 at 6:20 AM David Rowley <dgrowleyml@gmail.com>\n> wrote:\n> >> Having the \"onerow\" flag was not how I intended it to work.\n> >>\n> > Thanks for the detailed explanation. 
So I think we do need to handle\n> onerow\n> > specially, (It means more things than adding each column as a UniqueKey).\n> > but we don't need the onerow flag since we can tell it by ukey->exprs ==\n> NIL.\n> >\n> > During the developer of this feature, I added some Asserts as double\n> checking\n> > for onerow and exprs. it helps me to find some special cases. like\n> > SELECT FROM multirows union SELECT FROM multirows; where targetlist is\n> NIL.\n> > (I find the above case returns onerow as well just now). so onerow flag\n> allows us\n> > check this special things with more attention. Even this is not the\n> original intention\n> > but looks it is the one purpose now.\n>\n> But surely that special case should just go in\n> populate_unionrel_uniquekeys(). If the targetlist is empty, then add a\n> UniqueKey with an empty set of exprs.\n>\n> This is correct on this special case.\n\n> However I am feeling that removing onerow flag doesn't require much of\n> code\n> > changes. Almost all the special cases which are needed before are still\n> needed\n> > after that and all the functions based on that like relation_is_onerow\n> > /add_uniquekey_onerow is still valid, we just need change the\n> implementation.\n> > so do you think we need to remove onerow flag totally?\n>\n> Well, at the moment I'm not quite understanding why it's needed. If\n> it's not needed then we should remove it. If it turns out there is\n> some valid reason for it, then we should keep it.\n>\n\nCurrently I uses it to detect more special case which we can't image at\nfirst, we can\nunderstand it as it used to debug/Assert purpose only. 
After the mainly\ncode is\nreviewed, that can be removed (based on the change is tiny).\n\n-- \nBest Regards\nAndy Fan\n\nOn Fri, May 22, 2020 at 4:52 AM David Rowley <dgrowleyml@gmail.com> wrote:On Thu, 14 May 2020 at 14:39, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n> On Thu, May 14, 2020 at 6:20 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>> Having the \"onerow\" flag was not how I intended it to work.\n>>\n> Thanks for the detailed explanation.  So I think we do need to handle onerow\n> specially, (It means more things than adding each column as a UniqueKey).\n> but we don't need the onerow flag since we can tell it by ukey->exprs == NIL.\n>\n> During the developer of this feature,  I added some Asserts as double checking\n> for onerow and exprs.  it helps me to find some special cases. like\n> SELECT FROM multirows  union SELECT  FROM multirows; where targetlist is NIL.\n> (I find the above case returns onerow as well just now).  so onerow flag allows us\n> check this special things with more attention. Even this is not the original intention\n> but looks it is the one purpose now.\n\nBut surely that special case should just go in\npopulate_unionrel_uniquekeys(). If the targetlist is empty, then add a\nUniqueKey with an empty set of exprs.\nThis is correct on this special case.  \n> However I am feeling that removing onerow flag doesn't require much of code\n> changes. Almost all the special cases which are needed before are still needed\n> after that and all the functions based on that like relation_is_onerow\n> /add_uniquekey_onerow is still valid, we just need change the implementation.\n> so do you think we need to remove onerow flag totally?\n\nWell, at the moment I'm not quite understanding why it's needed. If\nit's not needed then we should remove it. 
If it turns out there is\nsome valid reason for it, then we should keep it.Currently I uses it to detect more special case which we can't image at first, we can understand it as it used to  debug/Assert purpose only.   After the mainly code is reviewed,  that can be removed (based on the change is tiny). -- Best RegardsAndy Fan", "msg_date": "Fri, 22 May 2020 07:49:07 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "> On Fri, May 22, 2020 at 08:40:17AM +1200, David Rowley wrote:\n>\n> The way I picture the two working together is that the core UniqueKey\n> patch adds UniqueKeys to RelOptInfos to allow us to have knowledge\n> about what they're unique on based on the base relation's unique\n> indexes.\n\nYep, I'm onboard with the idea.\n\n> For Skipscans, that patch must expand on UniqueKeys to teach Paths\n> about them.\n\nThat's already there.\n\n> I imagine we'll set some required UniqueKeys during\n> standard_qp_callback()\n\nIn standard_qp_callback, because pathkeys are computed at this point I\nguess?\n\n> and then we'll try to create some Skip Scan\n> paths (which are marked with UniqueKeys) if the base relation does not\n> already have UniqueKeys that satisfy the required UniqueKeys that were\n> set during standard_qp_callback().\n\nFor a simple distinct query those UniqueKeys would be set based on\ndistinct clause. If I understand correctly, the very same is implemented\nright now in create_distinct_paths, just after building all index paths,\nso wouldn't it be just a duplication?\n\nIn general UniqueKeys in the skip scan patch were created from\ndistinctClause in build_index_paths (looks similar to what you've\ndescribed) and then based on them created index skip scan paths. 
So my\nexpectations were that the patch from this thread would work similar.\n\n\n", "msg_date": "Sat, 23 May 2020 18:16:34 +0200", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "On Sun, 24 May 2020 at 04:14, Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n>\n> > On Fri, May 22, 2020 at 08:40:17AM +1200, David Rowley wrote:\n> > I imagine we'll set some required UniqueKeys during\n> > standard_qp_callback()\n>\n> In standard_qp_callback, because pathkeys are computed at this point I\n> guess?\n\nYes. In particular, we set the pathkeys for DISTINCT clauses there.\n\n> > and then we'll try to create some Skip Scan\n> > paths (which are marked with UniqueKeys) if the base relation does not\n> > already have UniqueKeys that satisfy the required UniqueKeys that were\n> > set during standard_qp_callback().\n>\n> For a simple distinct query those UniqueKeys would be set based on\n> distinct clause. If I understand correctly, the very same is implemented\n> right now in create_distinct_paths, just after building all index paths,\n> so wouldn't it be just a duplication?\n\nI think we need to create the skip scan paths when we create the other\npaths for base relations. We shouldn't be adjusting existing index\npaths during create_distinct_paths(). The last code I saw for the\nskip scans patch did something like if (IsA(path, IndexScanPath)) in\ncreate_distinct_paths(), but that's only ever going to work when the\nquery is to a single relation. You'll never see IndexScanPaths in the\nupper planner's paths when there are joins. You'd see join type paths\ninstead. It is possible to make use of skip scans for DISTINCT when\nthe query has joins. 
We'd just need to ensure the join does not\nduplicate the unique rows from the skip scanned relation.\n\n> In general UniqueKeys in the skip scan patch were created from\n> distinctClause in build_index_paths (looks similar to what you've\n> described) and then based on them created index skip scan paths. So my\n> expectations were that the patch from this thread would work similar.\n\nThe difference will be that you'd be setting some distinct_uniquekeys\nin standard_qp_callback() to explicitly request that some skip scan\npaths be created for the uniquekeys, whereas the patch here just does\nnot bother doing DISTINCT if the upper relation already has unique\nkeys that state that the DISTINCT is not required. The skip scans\npatch should check if the RelOptInfo for the uniquekeys set in\nstandard_qp_callback() are already mentioned in the RelOptInfo's\nuniquekeys. If they are then there's no point in skip scanning as the\nrel is already unique for the distinct_uniquekeys.\n\nDavid\n\n\n", "msg_date": "Mon, 25 May 2020 06:34:30 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "> On Mon, May 25, 2020 at 06:34:30AM +1200, David Rowley wrote:\n>\n> > For a simple distinct query those UniqueKeys would be set based on\n> > distinct clause. If I understand correctly, the very same is implemented\n> > right now in create_distinct_paths, just after building all index paths,\n> > so wouldn't it be just a duplication?\n>\n> I think we need to create the skip scan paths when we create the other\n> paths for base relations. We shouldn't be adjusting existing index\n> paths during create_distinct_paths(). 
The last code I saw for the\n> skip scans patch did something like if (IsA(path, IndexScanPath)) in\n> create_distinct_paths()\n\nIt's not the case since the late March.\n\n> > In general UniqueKeys in the skip scan patch were created from\n> > distinctClause in build_index_paths (looks similar to what you've\n> > described) and then based on them created index skip scan paths. So my\n> > expectations were that the patch from this thread would work similar.\n>\n> The difference will be that you'd be setting some distinct_uniquekeys\n> in standard_qp_callback() to explicitly request that some skip scan\n> paths be created for the uniquekeys, whereas the patch here just does\n> not bother doing DISTINCT if the upper relation already has unique\n> keys that state that the DISTINCT is not required. The skip scans\n> patch should check if the RelOptInfo for the uniquekeys set in\n> standard_qp_callback() are already mentioned in the RelOptInfo's\n> uniquekeys. If they are then there's no point in skip scanning as the\n> rel is already unique for the distinct_uniquekeys.\n\nIt sounds like it makes semantics of UniqueKey a bit more confusing,\nisn't it? At the moment it says:\n\n Represents the unique properties held by a RelOptInfo.\n\nWith the proposed changes it would be \"unique properties, that are held\"\nand \"unique properties, that are requested\", which are partially\nduplicated, but stored in some different fields. From the skip scan\npatch perspective it's probably doesn't make any difference, seems like\nthe implementation would be almost the same, just created UniqueKeys\nwould be of different type. 
But I'm afraid potentiall future users of\nUniqueKeys could be easily confused.\n\n\n", "msg_date": "Mon, 25 May 2020 09:16:26 +0200", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "On Mon, 25 May 2020 at 19:14, Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n>\n> > On Mon, May 25, 2020 at 06:34:30AM +1200, David Rowley wrote:\n> > The difference will be that you'd be setting some distinct_uniquekeys\n> > in standard_qp_callback() to explicitly request that some skip scan\n> > paths be created for the uniquekeys, whereas the patch here just does\n> > not bother doing DISTINCT if the upper relation already has unique\n> > keys that state that the DISTINCT is not required. The skip scans\n> > patch should check if the RelOptInfo for the uniquekeys set in\n> > standard_qp_callback() are already mentioned in the RelOptInfo's\n> > uniquekeys. If they are then there's no point in skip scanning as the\n> > rel is already unique for the distinct_uniquekeys.\n>\n> It sounds like it makes semantics of UniqueKey a bit more confusing,\n> isn't it? At the moment it says:\n>\n> Represents the unique properties held by a RelOptInfo.\n>\n> With the proposed changes it would be \"unique properties, that are held\"\n> and \"unique properties, that are requested\", which are partially\n> duplicated, but stored in some different fields. From the skip scan\n> patch perspective it's probably doesn't make any difference, seems like\n> the implementation would be almost the same, just created UniqueKeys\n> would be of different type. 
But I'm afraid potentiall future users of\n> UniqueKeys could be easily confused.\n\nIf there's some comment that says UniqueKeys are for RelOptInfos, then\nperhaps that comment just needs to be expanded to mention the Path\nuniqueness when we add the uniquekeys field to Path.\n\nI think the main point of basing skip scans on top of this uniquekeys\npatch is to ensure it's the right thing for the job. I don't think\nit's realistic to be maintaining two different sets of infrastructure\nwhich serve a very similar purpose. It's important we make UniqueKeys\ngeneral purpose enough to support future useful forms of optimisation.\nBasing skip scans on it seems like a good exercise towards that. I'm\nnot expecting that we need to make zero changes here to allow it to\nwork well with skip scans.\n\nDavid\n\n\n", "msg_date": "Fri, 5 Jun 2020 12:26:15 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "On Mon, May 25, 2020 at 2:34 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Sun, 24 May 2020 at 04:14, Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n> >\n> > > On Fri, May 22, 2020 at 08:40:17AM +1200, David Rowley wrote:\n> > > I imagine we'll set some required UniqueKeys during\n> > > standard_qp_callback()\n> >\n> > In standard_qp_callback, because pathkeys are computed at this point I\n> > guess?\n>\n> Yes. In particular, we set the pathkeys for DISTINCT clauses there.\n>\n>\nActually I have some issues to understand from here, then try to read index\nskip scan patch to fully understand what is the requirement, but that\ndoesn't\nget it so far[1]. So what is the \"UniqueKeys\" in \"UniqueKeys during\nstandard_qp_callback()\" and what is the \"pathkeys\" in \"pathkeys are computed\nat this point” means? 
I tried to think it as root->distinct_pathkeys,\nhowever I\ndidn't fully understand where root->distinct_pathkeys is used for as well.\n\n[1]\nhttps://www.postgresql.org/message-id/CAKU4AWq%3DwWkAo-CDOQ5Ea6UwYvZCgb501w6iqU0rtnTT-zg6bQ%40mail.gmail.com\n\n-- \nBest Regards\nAndy Fan\n\n", "msg_date": "Fri, 5 Jun 2020 10:36:39 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "On Fri, 5 Jun 2020 at 14:36, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> On Mon, May 25, 2020 at 2:34 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>>\n>> On Sun, 24 May 2020 at 04:14, Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n>> >\n>> > > On Fri, May 22, 2020 at 08:40:17AM +1200, David Rowley wrote:\n>> > > I imagine we'll set some required UniqueKeys during\n>> > > standard_qp_callback()\n>> >\n>> > In standard_qp_callback, because pathkeys are computed at this point I\n>> > guess?\n>>\n>> Yes. In particular, we set the pathkeys for DISTINCT clauses there.\n>>\n>\n> Actually I have some issues to understand from here, then try to read index\n> skip scan patch to fully understand what is the requirement, but that doesn't\n> get it so far[1]. So what is the \"UniqueKeys\" in \"UniqueKeys during\n> standard_qp_callback()\" and what is the \"pathkeys\" in \"pathkeys are computed\n> at this point” means? I tried to think it as root->distinct_pathkeys, however I\n> didn't fully understand where root->distinct_pathkeys is used for as well.\n\nIn standard_qp_callback(), what we'll do with uniquekeys is pretty\nmuch what we already do with pathkeys there. Basically pathkeys are\nset there to have the planner attempt to produce a plan that satisfies\nthose pathkeys. Notice at the end of standard_qp_callback() we set\nthe pathkeys according to the first upper planner operation that'll\nneed to make use of those pathkeys. e.g, If there's a GROUP BY and a\nDISTINCT in the query, then use the pathkeys for GROUP BY, since that\nmust occur before DISTINCT. 
Likely uniquekeys will want to follow the\nsame rules there for the operations that can make use of paths with\nuniquekeys, which in this case, I believe, will be the same as the\nexample I just mentioned for pathkeys, except we'll only be able to\nsupport GROUP BY without any aggregate functions.\n\nDavid\n\n> [1] https://www.postgresql.org/message-id/CAKU4AWq%3DwWkAo-CDOQ5Ea6UwYvZCgb501w6iqU0rtnTT-zg6bQ%40mail.gmail.com\n\n\n", "msg_date": "Fri, 5 Jun 2020 14:57:09 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "On Fri, Jun 5, 2020 at 10:57 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Fri, 5 Jun 2020 at 14:36, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> > On Mon, May 25, 2020 at 2:34 AM David Rowley <dgrowleyml@gmail.com>\n> wrote:\n> >>\n> >> On Sun, 24 May 2020 at 04:14, Dmitry Dolgov <9erthalion6@gmail.com>\n> wrote:\n> >> >\n> >> > > On Fri, May 22, 2020 at 08:40:17AM +1200, David Rowley wrote:\n> >> > > I imagine we'll set some required UniqueKeys during\n> >> > > standard_qp_callback()\n> >> >\n> >> > In standard_qp_callback, because pathkeys are computed at this point I\n> >> > guess?\n> >>\n> >> Yes. In particular, we set the pathkeys for DISTINCT clauses there.\n> >>\n> >\n> > Actually I have some issues to understand from here, then try to read\n> index\n> > skip scan patch to fully understand what is the requirement, but that\n> doesn't\n> > get it so far[1]. So what is the \"UniqueKeys\" in \"UniqueKeys during\n> > standard_qp_callback()\" and what is the \"pathkeys\" in \"pathkeys are\n> computed\n> > at this point” means? I tried to think it as root->distinct_pathkeys,\n> however I\n> > didn't fully understand where root->distinct_pathkeys is used for as\n> well.\n>\n> In standard_qp_callback(), what we'll do with uniquekeys is pretty\n> much what we already do with pathkeys there. 
Basically pathkeys are\n> set there to have the planner attempt to produce a plan that satisfies\n> those pathkeys. Notice at the end of standard_qp_callback() we set\n> the pathkeys according to the first upper planner operation that'll\n> need to make use of those pathkeys. e.g, If there's a GROUP BY and a\n> DISTINCT in the query, then use the pathkeys for GROUP BY, since that\n> must occur before DISTINCT.\n\n\nThanks for your explanation. Looks I understand now based on your comments.\nTake root->group_pathkeys for example, the similar information is also\navailable in\nroot->parse->groupClauses but we do make use of root->group_pathkeys with\nthe pathkeys_count_contained_in function in many places, that is mainly because\nthe content between the 2 is different sometimes, like the case in\npathkey_is_redundant.\n\nLikely uniquekeys will want to follow the\n> same rules there for the operations that can make use of paths with\n> uniquekeys, which in this case, I believe, will be the same as the\n> example I just mentioned for pathkeys, except we'll only be able to\n> support GROUP BY without any aggregate functions.\n>\n>\nAll the places I want to use UniqueKey so far (like distinct, group by and\nothers)\nhave an input_relation (RelOptInfo), and the UniqueKey information can be\ngotten\nthere. 
at the same time, all the pathkey in PlannerInfo is used for Upper\nplanner\nbut UniqueKey may be used in current planner some time, like\nreduce_semianti_joins/\nremove_useless_join, I am not sure if we must maintain uniquekey in\nPlannerInfo.\n\n-- \nBest Regards\nAndy Fan\n\n", "msg_date": "Fri, 5 Jun 2020 12:20:58 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "> On Fri, Jun 05, 2020 at 12:26:15PM +1200, David Rowley wrote:\n> On Mon, 25 May 2020 at 19:14, Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n> >\n> > > On Mon, May 25, 2020 at 06:34:30AM +1200, David Rowley wrote:\n> > > The difference will be that you'd be setting some distinct_uniquekeys\n> > > in standard_qp_callback() to explicitly request that some skip scan\n> > > paths be created for the uniquekeys, whereas the patch here just does\n> > > not bother doing DISTINCT if the upper relation already has unique\n> > > keys that state that the DISTINCT is not required. 
The skip scans\n> > > patch should check if the RelOptInfo for the uniquekeys set in\n> > > standard_qp_callback() are already mentioned in the RelOptInfo's\n> > > uniquekeys. If they are then there's no point in skip scanning as the\n> > > rel is already unique for the distinct_uniquekeys.\n> >\n> > It sounds like it makes semantics of UniqueKey a bit more confusing,\n> > isn't it? At the moment it says:\n> >\n> > Represents the unique properties held by a RelOptInfo.\n> >\n> > With the proposed changes it would be \"unique properties, that are held\"\n> > and \"unique properties, that are requested\", which are partially\n> > duplicated, but stored in some different fields. From the skip scan\n> > patch perspective it's probably doesn't make any difference, seems like\n> > the implementation would be almost the same, just created UniqueKeys\n> > would be of different type. But I'm afraid potentiall future users of\n> > UniqueKeys could be easily confused.\n>\n> If there's some comment that says UniqueKeys are for RelOptInfos, then\n> perhaps that comment just needs to be expanded to mention the Path\n> uniqueness when we add the uniquekeys field to Path.\n\nMy concerns are more about having two different sets of distinct\nuniquekeys:\n\n* one prepared in standard_qp_callback for skip scan (I guess those\n should be added to PlannerInfo?)\n\n* one in create_distinct_paths as per current implementation\n\nwith what seems to be similar content.\n\n> I think the main point of basing skip scans on top of this uniquekeys\n> patch is to ensure it's the right thing for the job. I don't think\n> it's realistic to be maintaining two different sets of infrastructure\n> which serve a very similar purpose. It's important we make UniqueKeys\n> general purpose enough to support future useful forms of optimisation.\n> Basing skip scans on it seems like a good exercise towards that. 
I'm\n> not expecting that we need to make zero changes here to allow it to\n> work well with skip scans.\n\nSure, no one suggests to have two ways of saying \"this thing is unique\".\nI'm just trying to figure out how to make skip scan and uniquekeys play\ntogether without having rough edges.\n\n\n", "msg_date": "Sat, 6 Jun 2020 11:17:51 +0200", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "On Sat, 6 Jun 2020 at 21:15, Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n> My concerns are more about having two different sets of distinct\n> uniquekeys:\n>\n> * one prepared in standard_qp_callback for skip scan (I guess those\n> should be added to PlannerInfo?)\n\nYes. Those must be set so that we know if and what we should try to\ncreate Skip Scan Index paths for. Just like we'll create index paths\nfor PlannerInfo.query_pathkeys.\n\n> * one in create_distinct_paths as per current implementation\n>\n> with what seems to be similar content.\n\nI think we need to have UniqueKeys in RelOptInfo so we can describe\nwhat a relation is unique by. There's no point for example in\ncreating skip scan paths for a relation that's already unique on\nwhatever we might try to skip scan on. e.g someone does:\n\nSELECT DISTINCT unique_and_indexed_column FROM tab;\n\nSince there's a unique index on unique_and_indexed_column then we\nneedn't try to create a skipscan path for it.\n\nHowever, the advantages of having UniqueKeys on the RelOptInfo goes a\nlittle deeper than that. We can make use of it anywhere where we\ncurrently do relation_has_unique_index_for() for. Plus we get what\nAndy wants and can skip useless DISTINCT operations when the result is\nalready unique on the distinct clause. Sure we could carry all the\nrelation's unique properties around in Paths, but that's not the right\nplace. It's logically a property of the relation, not the path\nspecifically. 
RelOptInfo is a good place to store the properties of\nrelations.\n\nThe idea of the meaning of uniquekeys within a path is that the path\nis specifically making those keys unique. We're not duplicating the\nRelOptInfo's uniquekeys there.\n\nIf we have a table like:\n\nCREATE TABLE tab (\n a INT PRIMARY KEY,\n b INT NOT NULL\n);\n\nCREATE INDEX tab_b_idx ON tab (b);\n\nThen I'd expect a query such as: SELECT DISTINCT b FROM tab; to have\nthe uniquekeys for tab's RelOptInfo set to {a}, and the seqscan and\nindex scan paths uniquekey properties set to NULL, but the skipscan\nindex path uniquekeys for tab_b_idx set to {b}. Then when we go\ncreate the distinct paths Andy's work will see that there's no\nRelOptInfo uniquekeys for the distinct clause, but the skip scan work\nwill loop over the unique_pathlist and find that we have a skipscan\npath with the required uniquekeys, a.k.a {b}.\n\nDoes that make sense?\n\nDavid\n\n\n", "msg_date": "Sun, 7 Jun 2020 18:51:22 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "> On Sun, Jun 07, 2020 at 06:51:22PM +1200, David Rowley wrote:\n>\n> > * one in create_distinct_paths as per current implementation\n> >\n> > with what seems to be similar content.\n>\n> I think we need to have UniqueKeys in RelOptInfo so we can describe\n> what a relation is unique by. There's no point for example in\n> creating skip scan paths for a relation that's already unique on\n> whatever we might try to skip scan on. e.g someone does:\n>\n> SELECT DISTINCT unique_and_indexed_column FROM tab;\n>\n> Since there's a unique index on unique_and_indexed_column then we\n> needn't try to create a skipscan path for it.\n>\n> However, the advantages of having UniqueKeys on the RelOptInfo goes a\n> little deeper than that. We can make use of it anywhere where we\n> currently do relation_has_unique_index_for() for. 
Plus we get what\n> Andy wants and can skip useless DISTINCT operations when the result is\n> already unique on the distinct clause. Sure we could carry all the\n> relation's unique properties around in Paths, but that's not the right\n> place. It's logically a property of the relation, not the path\n> specifically. RelOptInfo is a good place to store the properties of\n> relations.\n>\n> The idea of the meaning of uniquekeys within a path is that the path\n> is specifically making those keys unique. We're not duplicating the\n> RelOptInfo's uniquekeys there.\n>\n> If we have a table like:\n>\n> CREATE TABLE tab (\n> a INT PRIMARY KEY,\n> b INT NOT NULL\n> );\n>\n> CREATE INDEX tab_b_idx ON tab (b);\n>\n> Then I'd expect a query such as: SELECT DISTINCT b FROM tab; to have\n> the uniquekeys for tab's RelOptInfo set to {a}, and the seqscan and\n> index scan paths uniquekey properties set to NULL, but the skipscan\n> index path uniquekeys for tab_b_idx set to {b}. Then when we go\n> create the distinct paths Andy's work will see that there's no\n> RelOptInfo uniquekeys for the distinct clause, but the skip scan work\n> will loop over the unique_pathlist and find that we have a skipscan\n> path with the required uniquekeys, a.k.a {b}.\n>\n> Does that make sense?\n\nYes, from this point of view it makes sense. I've already posted the\nfirst version of index skip scan based on this implementation [1]. There\ncould be rought edges, but overall I hope we're on the same page.\n\n[1]: https://www.postgresql.org/message-id/flat/20200609102247.jdlatmfyeecg52fi%40localhost\n\n\n", "msg_date": "Tue, 9 Jun 2020 12:29:13 +0200", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "I just did another self-review about this patch and took some suggestions\nbased\non the discussion above. The attached is the v9 version. 
When you check\nthe\nuniquekey patch, README.uniquekey should be a good place to start with.\n\nMain changes in v9 includes:\n\n1. called populate_baserel_uniquekeys after check_index_predicates.\n2. removed the UniqueKey->onerow flag since we can tell it by exprs == NIL.\n3. expression index code improvement.\n4. code & comments refactoring.\n\nAs for the Index Skip Scan, I still have not merged the changes in the\nIndex\nSkip Scan patch[1]. We may need some addition for that, but probably not\nneed to modify the existing code. After we can finalize it, we can add it\nin\nthat patch. I will keep a close eye on it as well.\n\n[1]:\nhttps://www.postgresql.org/message-id/flat/20200609102247.jdlatmfyeecg52fi%40localhost\n-- \nBest Regards\nAndy Fan", "msg_date": "Mon, 29 Jun 2020 17:59:21 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "Fixed a test case in v10.\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Sun, 19 Jul 2020 11:03:26 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "Hi Andy,\r\n\r\nA small thing I found:\r\n\r\n+static List *\r\n+get_exprs_from_uniqueindex(IndexOptInfo *unique_index,\r\n+ List *const_exprs,\r\n+ List *const_expr_opfamilies,\r\n+ Bitmapset *used_varattrs,\r\n+ bool *useful,\r\n+ bool *multi_nullvals)\r\n…\r\n+ indexpr_item = list_head(unique_index->indexprs);\r\n+ for(c = 0; c < unique_index->ncolumns; c++)\r\n+ {\r\n\r\nI believe the for loop must be over unique_index->nkeycolumns, rather than columns. It shouldn’t include the extra non-key columns. 
This can currently lead to invalid memory accesses as well a few lines later when it does an array access of unique_index->opfamily[c] – this array only has nkeycolumns entries.\r\n\r\n-Floris\r\n\r\n\r\nFrom: Andy Fan <zhihui.fan1213@gmail.com>\r\nSent: Sunday 19 July 2020 5:03 AM\r\nTo: Dmitry Dolgov <9erthalion6@gmail.com>\r\nCc: David Rowley <dgrowleyml@gmail.com>; PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>; Tom Lane <tgl@sss.pgh.pa.us>; Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>; rushabh.lathia@gmail.com\r\nSubject: Re: [PATCH] Keeps tracking the uniqueness with UniqueKey [External]\r\n\r\nFixed a test case in v10.\r\n\r\n--\r\nBest Regards\r\nAndy Fan\r\n", "msg_date": "Wed, 22 Jul 2020 19:22:09 +0000", "msg_from": "Floris Van Nee <florisvannee@Optiver.com>", "msg_from_op": false, "msg_subject": "RE: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "Hi Floris:\n\nOn Thu, Jul 23, 2020 at 3:22 AM Floris Van Nee <florisvannee@optiver.com>\nwrote:\n\n> Hi Andy,\n>\n>\n>\n> A small thing I found:\n>\n>\n>\n> +static List *\n>\n> +get_exprs_from_uniqueindex(IndexOptInfo *unique_index,\n>\n> +\n> List *const_exprs,\n>\n> +\n> List *const_expr_opfamilies,\n>\n> +\n> Bitmapset *used_varattrs,\n>\n> +\n> bool *useful,\n>\n> +\n> bool *multi_nullvals)\n>\n> …\n>\n> + indexpr_item = list_head(unique_index->indexprs);\n>\n> + for(c = 0; c < unique_index->ncolumns; c++)\n>\n> + {\n>\n>\n>\n> I believe the for loop must be over unique_index->nkeycolumns, rather than\n> columns. It shouldn’t include the extra non-key columns. 
This can currently\n> lead to invalid memory accesses as well a few lines later when it does an\n> array access of unique_index->opfamily[c] – this array only has nkeycolumns\n> entries.\n>\n\nYou are correct, I would include this in the next version patch, Thank you\nfor this checking!\n\n--\nAndy Fan\nBest Regards\n\n>\n>\n>\n>\n> *From:* Andy Fan <zhihui.fan1213@gmail.com>\n> *Sent:* Sunday 19 July 2020 5:03 AM\n> *To:* Dmitry Dolgov <9erthalion6@gmail.com>\n> *Cc:* David Rowley <dgrowleyml@gmail.com>; PostgreSQL Hackers <\n> pgsql-hackers@lists.postgresql.org>; Tom Lane <tgl@sss.pgh.pa.us>;\n> Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>; rushabh.lathia@gmail.com\n> *Subject:* Re: [PATCH] Keeps tracking the uniqueness with UniqueKey\n> [External]\n>\n>\n>\n> Fixed a test case in v10.\n>\n>\n>\n> --\n>\n> Best Regards\n>\n> Andy Fan\n>\n\n\n-- \nBest Regards\nAndy Fan\n", "msg_date": "Tue, 4 Aug 2020 06:59:50 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "On Tue, Aug 04, 2020 at 06:59:50AM +0800, Andy Fan wrote:\n> You are correct, I would include this in the next version patch, Thank you\n> for this checking!\n\nRegression tests are failing with this patch set applied. 
The CF bot\n> says so, and I can reproduce that locally as well. Could you look at\n> that please? I have switched the patch to \"waiting on author\".\n> --\n> Michael\n>\n\nThank you Michael for checking it, I can reproduce the same locally after\nrebasing to the latest master. The attached v11 has fixed it and includes\nthe fix Floris found.\n\nThe status of this patch is we are still in discussion about which data\ntype should\nUniqueKey->expr use. Both David [1] and I [2] shared some thinking about\nEquivalenceClasses, but neither of us have decided on it. So I still didn't\nchange\nanything about that now. I can change it once we have decided on it.\n\n[1]\nhttps://www.postgresql.org/message-id/CAApHDvoDMyw%3DhTuW-258yqNK4bhW6CpguJU_GZBh4x%2Brnoem3w%40mail.gmail.com\n\n[2]\nhttps://www.postgresql.org/message-id/CAKU4AWqy3Uv67%3DPR8RXG6LVoO-cMEwfW_LMwTxHdGrnu%2Bcf%2BdA%40mail.gmail.com\n\n-- \nBest Regards\nAndy Fan", "msg_date": "Wed, 9 Sep 2020 07:51:12 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "> On Wed, Sep 09, 2020 at 07:51:12AM +0800, Andy Fan wrote:\n>\n> Thank you Michael for checking it, I can reproduce the same locally after\n> rebasing to the latest master. The attached v11 has fixed it and includes\n> the fix Floris found.\n>\n> The status of this patch is we are still in discussion about which data\n> type should\n> UniqueKey->expr use. Both David [1] and I [2] shared some thinking about\n> EquivalenceClasses, but neither of us have decided on it. So I still didn't\n> change\n> anything about that now. 
I can change it once we have decided on it.\n>\n> [1]\n> https://www.postgresql.org/message-id/CAApHDvoDMyw%3DhTuW-258yqNK4bhW6CpguJU_GZBh4x%2Brnoem3w%40mail.gmail.com\n>\n> [2]\n> https://www.postgresql.org/message-id/CAKU4AWqy3Uv67%3DPR8RXG6LVoO-cMEwfW_LMwTxHdGrnu%2Bcf%2BdA%40mail.gmail.com\n\nHi,\n\nIn the Index Skip Scan thread Peter mentioned couple of issues that I\nbelieve need to be addressed here. In fact one about valgrind errors was\nalready fixed as far as I see (nkeycolumns instead of ncolumns), another\none was:\n\n/code/postgresql/patch/build/../source/src/backend/optimizer/path/uniquekeys.c:\nIn function ‘populate_baserel_uniquekeys’:\n/code/postgresql/patch/build/../source/src/backend/optimizer/path/uniquekeys.c:797:13:\nwarning: ‘expr’ may be used uninitialized in this function\n[-Wmaybe-uninitialized]\n 797 | else if (!list_member(unique_index->rel->reltarget->exprs, expr))\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nOther than that I wanted to ask what are the plans to proceed with this\npatch? It's been a while since the question was raised in which format\nto keep unique key expressions, and as far as I can see no detailed\nsuggestions or patch changes were proposed as a follow up. 
Obviously I\nwould love to see the first two preparation patches committed to avoid\ndependencies between patches, and want to suggest an incremental\napproach with simple format for start (what we have right now) with the\nidea how to extend it in the future to cover more cases.\n\n\n", "msg_date": "Wed, 7 Oct 2020 15:55:32 +0200", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "On Wed, Oct 7, 2020 at 9:55 PM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n\n> > On Wed, Sep 09, 2020 at 07:51:12AM +0800, Andy Fan wrote:\n> >\n> > Thank you Michael for checking it, I can reproduce the same locally after\n> > rebasing to the latest master. The attached v11 has fixed it and includes\n> > the fix Floris found.\n> >\n> > The status of this patch is we are still in discussion about which data\n> > type should\n> > UniqueKey->expr use. Both David [1] and I [2] shared some thinking about\n> > EquivalenceClasses, but neither of us have decided on it. So I still\n> didn't\n> > change\n> > anything about that now. I can change it once we have decided on it.\n> >\n> > [1]\n> >\n> https://www.postgresql.org/message-id/CAApHDvoDMyw%3DhTuW-258yqNK4bhW6CpguJU_GZBh4x%2Brnoem3w%40mail.gmail.com\n> >\n> > [2]\n> >\n> https://www.postgresql.org/message-id/CAKU4AWqy3Uv67%3DPR8RXG6LVoO-cMEwfW_LMwTxHdGrnu%2Bcf%2BdA%40mail.gmail.com\n>\n> Hi,\n>\n> In the Index Skip Scan thread Peter mentioned couple of issues that I\n> believe need to be addressed here. 
In fact one about valgrind errors was\n> already fixed as far as I see (nkeycolumns instead of ncolumns), another\n> one was:\n>\n>\n> /code/postgresql/patch/build/../source/src/backend/optimizer/path/uniquekeys.c:\n> In function ‘populate_baserel_uniquekeys’:\n>\n> /code/postgresql/patch/build/../source/src/backend/optimizer/path/uniquekeys.c:797:13:\n> warning: ‘expr’ may be used uninitialized in this function\n> [-Wmaybe-uninitialized]\n> 797 | else if (!list_member(unique_index->rel->reltarget->exprs, expr))\n> | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n>\n>\nI can fix this warning in the next version, thanks for reporting it. It\ncan be\nfixed like below or just adjust the if-elseif-else pattern.\n\n--- a/src/backend/optimizer/path/uniquekeys.c\n+++ b/src/backend/optimizer/path/uniquekeys.c\n@@ -760,6 +760,7 @@ get_exprs_from_uniqueindex(IndexOptInfo *unique_index,\n {\n /* Index on system column is not supported */\n Assert(false);\n+ expr = NULL; /* make compiler happy */\n }\n\n\n\n> Other than that I wanted to ask what are the plans to proceed with this\n> patch? It's been a while since the question was raised in which format\n> to keep unique key expressions, and as far as I can see no detailed\n> suggestions or patch changes were proposed as a follow up. 
Obviously I\n> would love to see the first two preparation patches committed to avoid\n> dependencies between patches, and want to suggest an incremental\n> approach with simple format for start (what we have right now) with the\n> idea how to extend it in the future to cover more cases.\n>\n\nI think the hardest part of this series is commit 2, it probably needs\nlots of\ndedicated time to review which would be the hardest part for the reviewers.\nI don't have a good suggestion, however.\n\n-- \nBest Regards\nAndy Fan\n\n", "msg_date": "Thu, 8 Oct 2020 09:34:51 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "Hi\r\n\r\nI have a look over this patch and find some typos in 0002.\r\n\r\n1.Some typos about unique:\r\nThere are some spelling mistakes about \"unique\" in code comments and README.\r\nSuch as: \"+However we define the UnqiueKey as below.\"\r\n\r\n2.function name about initililze_uniquecontext_for_joinrel:\r\nMay be it should be initialize_ uniquecontext_for_joinrel.\r\n\r\n3.some typos in comment:\r\n+\t\t\t * baserelation's basicrestrictinfo. so it must be in ON clauses.\r\n\r\nI think it shoule be \" basicrestrictinfo \" => \"baserestrictinfo\".\r\n\r\n\r\nBesides, I think list_copy can be used to simplify the following code.\r\n(But It seems the type of expr is still in discussion, so this may has no impact )\r\n+\tList\t*exprs = NIL;\r\n...\r\n+\tforeach(lc, unionrel->reltarget->exprs)\r\n+\t{\r\n+\t\texprs = lappend(exprs, lfirst(lc));\r\n+\t}\r\n\r\nBest regards,\r\n\n\n", "msg_date": "Thu, 8 Oct 2020 04:12:37 +0000", "msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "> On Thu, Oct 08, 2020 at 09:34:51AM +0800, Andy Fan wrote:\n>\n> > Other than that I wanted to ask what are the plans to proceed with this\n> > patch? It's been a while since the question was raised in which format\n> > to keep unique key expressions, and as far as I can see no detailed\n> > suggestions or patch changes were proposed as a follow up. 
Obviously I\n> > would love to see the first two preparation patches committed to avoid\n> > dependencies between patches, and want to suggest an incremental\n> > approach with simple format for start (what we have right now) with the\n> > idea how to extend it in the future to cover more cases.\n> >\n>\n> I think the hardest part of this series is commit 2, it probably needs\n> lots of\n> dedicated time to review which would be the hardest part for the reviewers.\n> I don't have a good suggestion, however.\n\nSure, and I would review the patch as well. But as far as I understand\nthe main issue is \"how to store uniquekey expressions\", and as long as\nit is not decided, no additional review will move the patch forward I\nguess.\n\n\n", "msg_date": "Thu, 8 Oct 2020 12:39:38 +0200", "msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "On Thu, Oct 8, 2020 at 6:39 PM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n\n> > On Thu, Oct 08, 2020 at 09:34:51AM +0800, Andy Fan wrote:\n> >\n> > > Other than that I wanted to ask what are the plans to proceed with this\n> > > patch? It's been a while since the question was raised in which format\n> > > to keep unique key expressions, and as far as I can see no detailed\n> > > suggestions or patch changes were proposed as a follow up. 
Obviously I\n> > > would love to see the first two preparation patches committed to avoid\n> > > dependencies between patches, and want to suggest an incremental\n> > > approach with simple format for start (what we have right now) with the\n> > > idea how to extend it in the future to cover more cases.\n> > >\n> >\n> > I think the hardest part of this series is commit 2, it probably needs\n> > lots of\n> > dedicated time to review which would be the hardest part for the\n> reviewers.\n> > I don't have a good suggestion, however.\n>\n> Sure, and I would review the patch as well.\n\n\nThank you very much!\n\n\n> But as far as I understand\n> the main issue is \"how to store uniquekey expressions\", and as long as\n> it is not decided, no additional review will move the patch forward I\n> guess.\n>\n\nI don't think so:) The patch may have other issues as well. For example,\nlogic error or duplicated code or cases needing improvement and so on.\n\n-- \nBest Regards\nAndy Fan\n\n", "msg_date": "Mon, 12 Oct 2020 10:32:39 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "On Thu, Oct 8, 2020 at 12:12 PM Hou, Zhijie <houzj.fnst@cn.fujitsu.com>\nwrote:\n\n> Hi\n>\n> I have a look over this patch and find some typos in 0002.\n>\n> 1.Some typos about unique:\n> There are some spelling mistakes about \"unique\" in code comments and\n> README.\n> Such as: \"+However we define the UnqiueKey as below.\"\n>\n> 2.function name about initililze_uniquecontext_for_joinrel:\n> May be it should be initialize_ uniquecontext_for_joinrel.\n>\n> 3.some typos in comment:\n> +\t\t\t * baserelation's basicrestrictinfo. 
so it must be\n> in ON clauses.\n>\n> I think it shoule be \" basicrestrictinfo \" => \"baserestrictinfo\".\n>\n>\n> Besides, I think list_copy can be used to simplify the following code.\n> (But It seems the type of expr is still in discussion, so this may has no\n> impact )\n> +\tList\t*exprs = NIL;\n> ...\n> +\tforeach(lc, unionrel->reltarget->exprs)\n> +\t{\n> +\t\texprs = lappend(exprs, lfirst(lc));\n> +\t}\n>\n> Best regards,\n>\n>\n>\nThank you zhijie, I will fix them in next version.\n\n-- \nBest Regards\nAndy Fan\n\n", "msg_date": "Mon, 12 Oct 2020 10:33:17 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "This patch has stopped moving for a while, any suggestion about\nhow to move on is appreciated.\n\n-- \nBest Regards\nAndy Fan\n\n", "msg_date": "Thu, 26 Nov 2020 22:58:12 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "On 26/11/2020 16:58, Andy Fan wrote:\n> This patch has stopped moving for a while,  any suggestion about\n> how to move on is appreciated.\n\nThe question on whether UniqueKey.exprs should be a list of \nEquivalenceClasses or PathKeys is unresolved. I don't have an opinion on \nthat, but I'd suggest that you pick one or the other and just go with \nit. If it turns out to be a bad choice, then we'll change it.\n\nQuickly looking at the patches, there's one thing I think no one's \nmentioned yet, but looks really ugly to me:\n\n> +\t\t/* Make sure the path->parent point to current joinrel, can't update it in-place. */\n> +\t\tforeach(lc, outer_rel->pathlist)\n> +\t\t{\n> +\t\t\tSize sz = size_of_path(lfirst(lc));\n> +\t\t\tPath *path = palloc(sz);\n> +\t\t\tmemcpy(path, lfirst(lc), sz);\n> +\t\t\tpath->parent = joinrel;\n> +\t\t\tadd_path(joinrel, path);\n> +\t\t}\n\nCopying a Path and modifying it like that is not good, there's got to be \na better way to do this. 
Perhaps wrap the original Paths in \nProjectionPaths, where the ProjectionPath's parent is the joinrel and \ndummypp=true.\n\n- Heikki\n\n\n", "msg_date": "Mon, 30 Nov 2020 12:04:34 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "Hi,\n\nOn 11/30/20 5:04 AM, Heikki Linnakangas wrote:\n> On 26/11/2020 16:58, Andy Fan wrote:\n>> This patch has stopped moving for a while,  any suggestion about\n>> how to move on is appreciated.\n>\n> The question on whether UniqueKey.exprs should be a list of \n> EquivalenceClasses or PathKeys is unresolved. I don't have an opinion \n> on that, but I'd suggest that you pick one or the other and just go \n> with it. If it turns out to be a bad choice, then we'll change it.\n\nIn this case I think it is matter of deciding if we are going to use \nEquivalenceClasses or Exprs before going further; there has been work \nongoing in this area for a while, so having a clear direction from a \ncommitter would be greatly appreciated.\n\nDeciding would also help potential reviewers to give more feedback on \nthe features implemented on top of the base.\n\nShould there be a new thread with the minimum requirements in order to \nget closer ?\n\nBest regards,\n  Jesper\n\n\n\n", "msg_date": "Mon, 30 Nov 2020 09:30:19 -0500", "msg_from": "Jesper Pedersen <jesper.pedersen@redhat.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "On 30/11/2020 16:30, Jesper Pedersen wrote:\n> On 11/30/20 5:04 AM, Heikki Linnakangas wrote:\n>> On 26/11/2020 16:58, Andy Fan wrote:\n>>> This patch has stopped moving for a while,  any suggestion about\n>>> how to move on is appreciated.\n>>\n>> The question on whether UniqueKey.exprs should be a list of\n>> EquivalenceClasses or PathKeys is unresolved. 
I don't have an opinion\n>> on that, but I'd suggest that you pick one or the other and just go\n>> with it. If it turns out to be a bad choice, then we'll change it.\n> \n> In this case I think it is matter of deciding if we are going to use\n> EquivalenceClasses or Exprs before going further; there has been work\n> ongoing in this area for a while, so having a clear direction from a\n> committer would be greatly appreciated.\n\nPlain Exprs are not good enough, because you need to know which operator \nthe expression is unique on. Usually, it's the default = operator in the \ndefault btree opclass for the datatype, but it could be something else, too.\n\nThere's some precedence for PathKeys, as we generate PathKeys to \nrepresent the DISTINCT column in PlannerInfo->distinct_pathkeys. On the \nother hand, I've always found it confusing that we use PathKeys to \nrepresent DISTINCT and GROUP BY, which are not actually sort orderings. \nPerhaps it would make sense to store EquivalenceClass+opfamily in \nUniqueKey, and also replace distinct_pathkeys and group_pathkeys with \nUniqueKeys.\n\nThat's just my 2 cents though, others more familiar with this planner \ncode might have other opinions...\n\n- Heikki\n\n\n", "msg_date": "Mon, 30 Nov 2020 17:20:18 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "Hi \r\n\r\nI look into the patch again and have some comments.\r\n\r\n1.\r\n+\tSize oid_cmp_len = sizeof(Oid) * ind1->ncolumns;\r\n+\r\n+\treturn ind1->ncolumns == ind2->ncolumns &&\r\n+\t\tind1->unique == ind2->unique &&\r\n+\t\tmemcmp(ind1->indexkeys, ind2->indexkeys, sizeof(int) * ind1->ncolumns) == 0 &&\r\n+\t\tmemcmp(ind1->opfamily, ind2->opfamily, oid_cmp_len) == 0 &&\r\n+\t\tmemcmp(ind1->opcintype, ind2->opcintype, oid_cmp_len) == 0 &&\r\n+\t\tmemcmp(ind1->sortopfamily, ind2->sortopfamily, oid_cmp_len) == 0 
&&\r\n+\t\tequal(get_tlist_exprs(ind1->indextlist, true),\r\n+\t\t\t get_tlist_exprs(ind2->indextlist, true));\r\n\r\nThe length of sortopfamily,opfamily and opcintype seems ->nkeycolumns not ->ncolumns.\r\nI checked function get_relation_info where init the IndexOptInfo.\r\n(If there are more places where can change the length, please correct me)\r\n\r\n\r\n2.\r\n\r\n+\tCOPY_SCALAR_FIELD(ncolumns);\r\n+\tCOPY_SCALAR_FIELD(nkeycolumns);\r\n+\tCOPY_SCALAR_FIELD(unique);\r\n+\tCOPY_SCALAR_FIELD(immediate);\r\n+\t/* We just need to know if it is NIL or not */\r\n+\tCOPY_SCALAR_FIELD(indpred);\r\n+\tCOPY_SCALAR_FIELD(predOK);\r\n+\tCOPY_POINTER_FIELD(indexkeys, from->ncolumns * sizeof(int));\r\n+\tCOPY_POINTER_FIELD(indexcollations, from->ncolumns * sizeof(Oid));\r\n+\tCOPY_POINTER_FIELD(opfamily, from->ncolumns * sizeof(Oid));\r\n+\tCOPY_POINTER_FIELD(opcintype, from->ncolumns * sizeof(Oid));\r\n+\tCOPY_POINTER_FIELD(sortopfamily, from->ncolumns * sizeof(Oid));\r\n+\tCOPY_NODE_FIELD(indextlist);\r\n\r\nThe same as 1.\r\nShould use nkeycolumns if I am right.\r\n\r\n\r\n3.\r\n+\tforeach(lc, newnode->indextlist)\r\n+\t{\r\n+\t\tTargetEntry *tle = lfirst_node(TargetEntry, lc);\r\n+\t\t/* Index on expression is ignored */\r\n+\t\tAssert(IsA(tle->expr, Var));\r\n+\t\ttle->expr = (Expr *) find_parent_var(appinfo, (Var *) tle->expr);\r\n+\t\tnewnode->indexkeys[idx] = castNode(Var, tle->expr)->varattno;\r\n+\t\tidx++;\r\n+\t}\r\n\r\nThe count variable 'idx' can be replaces by foreach_current_index().\r\n\r\n\r\nBest regards,\r\nhouzj\r\n\r\n\n\n", "msg_date": "Tue, 1 Dec 2020 05:45:35 +0000", "msg_from": "\"Hou, Zhijie\" <houzj.fnst@cn.fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "Thank you Heikki for your attention.\n\nOn Mon, Nov 30, 2020 at 11:20 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n\n> On 30/11/2020 16:30, Jesper Pedersen wrote:\n> > On 11/30/20 5:04 AM, Heikki Linnakangas 
wrote:\n> >> On 26/11/2020 16:58, Andy Fan wrote:\n> >>> This patch has stopped moving for a while, any suggestion about\n> >>> how to move on is appreciated.\n> >>\n> >> The question on whether UniqueKey.exprs should be a list of\n> >> EquivalenceClasses or PathKeys is unresolved. I don't have an opinion\n> >> on that, but I'd suggest that you pick one or the other and just go\n> >> with it. If it turns out to be a bad choice, then we'll change it.\n> >\n> > In this case I think it is matter of deciding if we are going to use\n> > EquivalenceClasses or Exprs before going further; there has been work\n> > ongoing in this area for a while, so having a clear direction from a\n> > committer would be greatly appreciated.\n>\n> Plain Exprs are not good enough, because you need to know which operator\n> the expression is unique on. Usually, it's the default = operator in the\n> default btree opclass for the datatype, but it could be something else,\n> too.\n\n\nActually I can't understand this, could you explain more? Based on my\ncurrent\nknowledge, when we run \"SELECT DISTINCT a FROM t\", we never care about\nwhich operator to use for the unique.\n\n\n\nThere's some precedence for PathKeys, as we generate PathKeys to\n> represent the DISTINCT column in PlannerInfo->distinct_pathkeys. On the\n> other hand, I've always found it confusing that we use PathKeys to\n> represent DISTINCT and GROUP BY, which are not actually sort orderings.\n>\n\nOK, I have the same confusion now:)\n\nPerhaps it would make sense to store EquivalenceClass+opfamily in\n> UniqueKey, and also replace distinct_pathkeys and group_pathkeys with\n> UniqueKeys.\n>\n>\nI can understand why we need EquivalenceClass for UniqueKey, but I can't\nunderstand why we need opfamily here.\n\n\nFor anyone who is interested with these patchsets, here is my plan about\nthis\nnow. 1). I will try EquivalenceClass rather than Expr in UniqueKey and\nadd opfamily\nif needed. 2). 
I will start a new thread to continue this topic. The\ncurrent thread is too long\nwhich may scare some people who may have interest in it. 3). I will give up\npatch 5 & 6\nfor now. one reason I am not happy with the current implementation, and\nthe other\nreason is I want to make the patchset smaller to make the reviewer easier.\nI will not\ngive up them forever, after the main part of this patchset is committed, I\nwill continue\nwith them in a new thread.\n\nThanks everyone for your input.\n\n-- \nBest Regards\nAndy Fan\n\n", "msg_date": "Sat, 5 Dec 2020 23:10:28 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "\nOn 05/12/2020 17:10, Andy Fan wrote:\n> Actually I can't understand this, could you explain more?  Based on my \n> current\n> knowledge,  when we run \"SELECT DISTINCT a FROM t\",  we never care about\n> which operator to use for the unique.\n\nSortGroupClause includes 'eqop' field, which determines the operator \nthat the expression needs to made unique with. The syntax doesn't let \nyou set it to anything else than the default btree opclass of the \ndatatype, though. 
But you can specify it for ORDER BY, and we use \nSortGroupClauses to represent both sorting and grouping.\n\nAlso, if you use the same struct to also represent columns that you know \nto be unique, and not just the DISTINCT clause in the query, then you \nneed the operator. For example, if you create a unique index on \nnon-default opfamily.\n\n> There's some precedence for PathKeys, as we generate PathKeys to\n> represent the DISTINCT column in PlannerInfo->distinct_pathkeys. On the\n> other hand, I've always found it confusing that we use PathKeys to\n> represent DISTINCT and GROUP BY, which are not actually sort orderings.\n> \n> \n> OK, I have the same confusion  now:)\n> \n> Perhaps it would  make sense to store EquivalenceClass+opfamily in\n> UniqueKey, and also replace distinct_pathkeys and group_pathkeys with\n> UniqueKeys.\n> \n> \n> I can understand why we need EquivalenceClass for UniqueKey, but I can't\n> understand why we need opfamily here.\n\nThinking a bit harder, I guess we don't. Because EquivalenceClass \nincludes the operator family already, in the ec_opfamilies field.\n\n> For anyone who is interested with these patchsets, here is my plan\n> about this now. 1). I will try EquivalenceClass rather than Expr in\n> UniqueKey and add opfamily if needed. 2). I will start a new thread\n> to continue this topic. The current thread is too long which may\n> scare some people who may have interest in it. 3). I will give up\n> patch 5 & 6 for now. one reason I am not happy with the current\n> implementation, and the other reason is I want to make the patchset\n> smaller to make the reviewer easier. I will not give up them forever,\n> after the main part of this patchset is committed, I will continue \n> with them in a new thread. 
Thanks everyone for your input.\nSounds like a plan.\n\n- Heikki\n\n\n", "msg_date": "Sat, 5 Dec 2020 20:40:33 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "Heikki Linnakangas <hlinnaka@iki.fi> writes:\n>> I can understand why we need EquivalenceClass for UniqueKey, but I can't\n>> understand why we need opfamily here.\n\n> Thinking a bit harder, I guess we don't. Because EquivalenceClass \n> includes the operator family already, in the ec_opfamilies field.\n\nNo. EquivalenceClasses only care about equality, which is why they\nmight potentially mention several opfamilies that share an equality\noperator. If you care about sort order, you *cannot* rely on an\nEquivalenceClass to depict that. Now, abstract uniqueness also only\ncares about equality, but if you are going to implement it via sort-\nand-unique then you need to settle on a sort order.\n\nI agree we are overspecifying DISTINCT by settling on a sort operator at\nparse time, rather than considering all the possibilities at plan time.\nBut given that opfamilies sharing equality are mostly a hypothetical\nuse-case, I'm not in a big hurry to fix it. Before we had ASC/DESC\nindexes, there was a real use-case for making a \"reverse sort\" opclass,\nwith the same equality as the type's regular opclass but the opposite sort\norder. But that's ancient history now, and I've seen few other plausible\nuse-cases.\n\nI have not been following this thread closely enough to understand\nwhy we need a new \"UniqueKeys\" data structure at all. 
But if the\nmotivation is only to remove this overspecification, I humbly suggest\nthat it ain't worth the trouble.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 05 Dec 2020 15:40:23 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "Thank you Tom and Heikki for your input.\n\nOn Sun, Dec 6, 2020 at 4:40 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Heikki Linnakangas <hlinnaka@iki.fi> writes:\n> >> I can understand why we need EquivalenceClass for UniqueKey, but I can't\n> >> understand why we need opfamily here.\n>\n> > Thinking a bit harder, I guess we don't. Because EquivalenceClass\n> > includes the operator family already, in the ec_opfamilies field.\n>\n> No. EquivalenceClasses only care about equality, which is why they\n> might potentially mention several opfamilies that share an equality\n> operator. If you care about sort order, you *cannot* rely on an\n> EquivalenceClass to depict that. Now, abstract uniqueness also only\n> cares about equality, but if you are going to implement it via sort-\n> and-unique then you need to settle on a sort order.\n>\n\nI think UniqueKey only cares about equality. Even DISTINCT / groupBy\ncan be implemented with sort, but UniqueKey only care about the result\nof DISTINCT/GROUPBY, so it doesn't matter IIUC.\n\n\n>\n> I agree we are overspecifying DISTINCT by settling on a sort operator at\n> parse time, rather than considering all the possibilities at plan time.\n> But given that opfamilies sharing equality are mostly a hypothetical\n> use-case, I'm not in a big hurry to fix it. Before we had ASC/DESC\n> indexes, there was a real use-case for making a \"reverse sort\" opclass,\n> with the same equality as the type's regular opclass but the opposite sort\n> order. 
But that's ancient history now, and I've seen few other plausible\n> use-cases.\n>\n> I have not been following this thread closely enough to understand\n> why we need a new \"UniqueKeys\" data structure at all.\n\n\nCurrently the UniqueKey is defined as a List of Expr, rather than\nEquivalenceClasses.\nA complete discussion until now can be found at [1] (The messages I replied\nto also\ncare a lot and the information is completed). This patch has stopped at\nthis place for\na while, I'm planning to try EquivalenceClasses, but any suggestion would\nbe welcome.\n\n\n> But if the\n> motivation is only to remove this overspecification, I humbly suggest\n> that it ain't worth the trouble.\n>\n> regards, tom lane\n>\n\n[1]\nhttps://www.postgresql.org/message-id/CAKU4AWqy3Uv67%3DPR8RXG6LVoO-cMEwfW_LMwTxHdGrnu%2Bcf%2BdA%40mail.gmail.com\n\n\n-- \nBest Regards\nAndy Fan\n\nThank you Tom and Heikki for your input. On Sun, Dec 6, 2020 at 4:40 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:Heikki Linnakangas <hlinnaka@iki.fi> writes:\n>> I can understand why we need EquivalenceClass for UniqueKey, but I can't\n>> understand why we need opfamily here.\n\n> Thinking a bit harder, I guess we don't. Because EquivalenceClass \n> includes the operator family already, in the ec_opfamilies field.\n\nNo.  EquivalenceClasses only care about equality, which is why they\nmight potentially mention several opfamilies that share an equality\noperator.  If you care about sort order, you *cannot* rely on an\nEquivalenceClass to depict that.  Now, abstract uniqueness also only\ncares about equality, but if you are going to implement it via sort-\nand-unique then you need to settle on a sort order.I think UniqueKey only cares about equality.   Even DISTINCT / groupBycan be implemented with sort,  but UniqueKey only care about the resultof DISTINCT/GROUPBY,  so it doesn't matter IIUC.  
\n\nI agree we are overspecifying DISTINCT by settling on a sort operator at\nparse time, rather than considering all the possibilities at plan time.\nBut given that opfamilies sharing equality are mostly a hypothetical\nuse-case, I'm not in a big hurry to fix it.  Before we had ASC/DESC\nindexes, there was a real use-case for making a \"reverse sort\" opclass,\nwith the same equality as the type's regular opclass but the opposite sort\norder.  But that's ancient history now, and I've seen few other plausible\nuse-cases.\n\nI have not been following this thread closely enough to understand\nwhy we need a new \"UniqueKeys\" data structure at all. Currently the UniqueKey is defined as a List of Expr, rather than EquivalenceClasses. A complete discussion until now can be found at [1] (The messages I replied to also care a lot and the information is completed). This patch has stopped at this place fora while,  I'm planning to try EquivalenceClasses,  but any suggestion would be welcome.   But if the\nmotivation is only to remove this overspecification, I humbly suggest\nthat it ain't worth the trouble.\n\n                        regards, tom lane\n[1] https://www.postgresql.org/message-id/CAKU4AWqy3Uv67%3DPR8RXG6LVoO-cMEwfW_LMwTxHdGrnu%2Bcf%2BdA%40mail.gmail.com -- Best RegardsAndy Fan", "msg_date": "Sun, 6 Dec 2020 11:38:55 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "Hi,\n\nOn 12/5/20 10:38 PM, Andy Fan wrote:\n> Currently the UniqueKey is defined as a List of Expr, rather than\n> EquivalenceClasses.\n> A complete discussion until now can be found at [1] (The messages I replied\n> to also\n> care a lot and the information is completed). 
This patch has stopped at\n> this place for\n> a while, I'm planning to try EquivalenceClasses, but any suggestion would\n> be welcome.\n> \n> \n\nUnfortunately I think we need a RfC style patch of both versions in \ntheir minimum implementation.\n\nHopefully this will make it easier for one or more committers to decide \non the right direction since they can do a side-by-side comparison of \nthe two solutions.\n\nJust my $0.02.\n\nThanks for working on this Andy !\n\nBest regards,\n Jesper\n\n\n\n", "msg_date": "Mon, 7 Dec 2020 03:15:51 -0500", "msg_from": "Jesper Pedersen <jesper.pedersen@redhat.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "On Mon, Dec 7, 2020 at 4:16 PM Jesper Pedersen <jesper.pedersen@redhat.com>\nwrote:\n\n> Hi,\n>\n> On 12/5/20 10:38 PM, Andy Fan wrote:\n> > Currently the UniqueKey is defined as a List of Expr, rather than\n> > EquivalenceClasses.\n> > A complete discussion until now can be found at [1] (The messages I\n> replied\n> > to also\n> > care a lot and the information is completed). This patch has stopped at\n> > this place for\n> > a while, I'm planning to try EquivalenceClasses, but any suggestion\n> would\n> > be welcome.\n> >\n> >\n>\n> Unfortunately I think we need a RfC style patch of both versions in\n> their minimum implementation.\n>\n> Hopefully this will make it easier for one or more committers to decide\n> on the right direction since they can do a side-by-side comparison of\n> the two solutions.\n>\n>\nI do get the exact same idea. Actually I have made EquivalenceClasses\nworks with baserel last weekend and then I realized it is hard to compare\nthe 2 situations without looking into the real/Poc code, even for very\nexperienced people. I will submit a new patch after I get the partitioned\nrelation, subquery works. 
Hope I can make it in one week.\n\n\n> Just my $0.02.\n>\n> Thanks for working on this Andy !\n>\n> Best regards,\n>  Jesper\n>\n>\n\n-- \nBest Regards\nAndy Fan\n", "msg_date": "Mon, 7 Dec 2020 20:14:56 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "On Sun, Dec 6, 2020 at 9:09 AM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>>\n>> I have not been following this thread closely enough to understand\n>> why we need a new \"UniqueKeys\" data structure at all.\n>\n>\n> Currently the UniqueKey is defined as a List of Expr, rather than EquivalenceClasses.\n> A complete discussion until now can be found at [1] (The messages I replied to also\n> care a lot and the information is completed). This patch has stopped at this place for\n> a while,  I'm planning to try EquivalenceClasses,  but any suggestion would be welcome.\n>\n>>\n>> But if the\n>> motivation is only to remove this overspecification, I humbly suggest\n>> that it ain't worth the trouble.\n\nAFAIK, the simple answer is we need some way to tell that certain\nexpressions together form a unique key for a given relation. E.g.\ngroup by clause forms a unique key for the output of GROUP BY.\nPathkeys have a stronger requirement that the relation is ordered on\nthat expression, which may not be the case with uniqueness e.g. output\nof GROUP BY produced by hash grouping. To me it's Pathkeys - ordering,\nso we could use Pathkeys with reduced strength. But that might affect\na lot of places which depend upon stronger pathkeys.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Mon, 7 Dec 2020 17:57:22 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "On Sun, 6 Dec 2020 at 04:10, Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> For anyone who is interested with these patchsets, here is my plan about this\n> now. 
1). I will try EquivalenceClass rather than Expr in UniqueKey and add opfamily\n> if needed.\n\nI agree that we should be storing them in EquivalenceClasses. Apart\nfrom what was mentioned already it also allow the optimisation to work\nin cases like:\n\ncreate table t (a int not null unique, b int);\nselect distinct b from t where a = b;\n\nDavid\n\n\n", "msg_date": "Wed, 9 Dec 2020 19:13:48 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "Hi Andy,\n\nOn Mon, Dec 7, 2020 at 9:15 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>\n>\n>\n> On Mon, Dec 7, 2020 at 4:16 PM Jesper Pedersen <jesper.pedersen@redhat.com> wrote:\n>>\n>> Hi,\n>>\n>> On 12/5/20 10:38 PM, Andy Fan wrote:\n>> > Currently the UniqueKey is defined as a List of Expr, rather than\n>> > EquivalenceClasses.\n>> > A complete discussion until now can be found at [1] (The messages I replied\n>> > to also\n>> > care a lot and the information is completed). This patch has stopped at\n>> > this place for\n>> > a while, I'm planning to try EquivalenceClasses, but any suggestion would\n>> > be welcome.\n>> >\n>> >\n>>\n>> Unfortunately I think we need a RfC style patch of both versions in\n>> their minimum implementation.\n>>\n>> Hopefully this will make it easier for one or more committers to decide\n>> on the right direction since they can do a side-by-side comparison of\n>> the two solutions.\n>>\n>\n> I do get the exact same idea. Actually I have made EquivalenceClasses\n> works with baserel last weekend and then I realized it is hard to compare\n> the 2 situations without looking into the real/Poc code, even for very\n> experienced people. I will submit a new patch after I get the partitioned\n> relation, subquery works. Hope I can make it in one week.\n\nStatus update for a commitfest entry.\n\nAre you planning to submit a new patch? Or is there any blocker for\nthis work? 
This patch entry on CF app has been in state Waiting on\nAuthor for a while. If there is any update on that, please reflect on\nCF app.\n\nRegards,\n\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Fri, 22 Jan 2021 22:14:40 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "Hi Masahiko:\n\nOn Fri, Jan 22, 2021 at 9:15 PM Masahiko Sawada <sawada.mshk@gmail.com>\nwrote:\n\n> Hi Andy,\n>\n> On Mon, Dec 7, 2020 at 9:15 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n> >\n> >\n> >\n> > On Mon, Dec 7, 2020 at 4:16 PM Jesper Pedersen <\n> jesper.pedersen@redhat.com> wrote:\n> >>\n> >> Hi,\n> >>\n> >> On 12/5/20 10:38 PM, Andy Fan wrote:\n> >> > Currently the UniqueKey is defined as a List of Expr, rather than\n> >> > EquivalenceClasses.\n> >> > A complete discussion until now can be found at [1] (The messages I\n> replied\n> >> > to also\n> >> > care a lot and the information is completed). This patch has stopped\n> at\n> >> > this place for\n> >> > a while, I'm planning to try EquivalenceClasses, but any suggestion\n> would\n> >> > be welcome.\n> >> >\n> >> >\n> >>\n> >> Unfortunately I think we need a RfC style patch of both versions in\n> >> their minimum implementation.\n> >>\n> >> Hopefully this will make it easier for one or more committers to decide\n> >> on the right direction since they can do a side-by-side comparison of\n> >> the two solutions.\n> >>\n> >\n> > I do get the exact same idea. Actually I have made EquivalenceClasses\n> > works with baserel last weekend and then I realized it is hard to compare\n> > the 2 situations without looking into the real/Poc code, even for very\n> > experienced people. I will submit a new patch after I get the\n> partitioned\n> > relation, subquery works. 
Hope I can make it in one week.\n>\n> Status update for a commitfest entry.\n>\n> Are you planning to submit a new patch? Or is there any blocker for\n> this work? This patch entry on CF app has been in state Waiting on\n> Author for a while. If there is any update on that, please reflect on\n> CF app.\n>\n>\n> I agree that  the current status is \"Waiting on author\",  and no block\nissue for others.\nI plan to work on this in 1 month.  I have to get my current urgent case\ncompleted first.\nSorry for the delay action and thanks for asking.\n\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)\n", "msg_date": "Sun, 24 Jan 2021 18:26:33 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" }, { "msg_contents": "On Sun, Jan 24, 2021 at 6:26 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n\n> Hi Masahiko:\n>\n> On Fri, Jan 22, 2021 at 9:15 PM Masahiko Sawada <sawada.mshk@gmail.com>\n> wrote:\n>\n>> Hi Andy,\n>>\n>> On Mon, Dec 7, 2020 at 9:15 PM Andy Fan <zhihui.fan1213@gmail.com> wrote:\n>> >\n>> >\n>> >\n>> > On Mon, Dec 7, 2020 at 4:16 PM Jesper Pedersen <\n>> jesper.pedersen@redhat.com> wrote:\n>> >>\n>> >> Hi,\n>> >>\n>> >> On 12/5/20 10:38 PM, Andy Fan wrote:\n>> >> > Currently the UniqueKey is defined as a List of Expr, rather than\n>> >> > EquivalenceClasses.\n>> >> > A complete discussion until now can be found at [1] (The messages I\n>> replied\n>> >> > to also\n>> >> > care a lot and the information is completed). 
This patch has stopped\n>> at\n>> >> > this place for\n>> >> > a while, I'm planning to try EquivalenceClasses, but any\n>> suggestion would\n>> >> > be welcome.\n>> >> >\n>> >> >\n>> >>\n>> >> Unfortunately I think we need a RfC style patch of both versions in\n>> >> their minimum implementation.\n>> >>\n>> >> Hopefully this will make it easier for one or more committers to decide\n>> >> on the right direction since they can do a side-by-side comparison of\n>> >> the two solutions.\n>> >>\n>> >\n>> > I do get the exact same idea. Actually I have made EquivalenceClasses\n>> > works with baserel last weekend and then I realized it is hard to\n>> compare\n>> > the 2 situations without looking into the real/Poc code, even for very\n>> > experienced people. I will submit a new patch after I get the\n>> partitioned\n>> > relation, subquery works. Hope I can make it in one week.\n>>\n>> Status update for a commitfest entry.\n>>\n>> Are you planning to submit a new patch? Or is there any blocker for\n>> this work? This patch entry on CF app has been in state Waiting on\n>> Author for a while. If there is any update on that, please reflect on\n>> CF app.\n>>\n>>\n>> I agree that the current status is \"Waiting on author\", and no block\n> issue for others.\n> I plan to work on this in 1 month. I have to get my current urgent case\n> completed first.\n> Sorry for the delay action and thanks for asking.\n>\n>\n>\nI'd start to continue this work today. At the same time, I will split the\nmulti-patch series\ninto some dedicated small chunks for easier review. 
The first one is just\nfor adding a\nnotnullattrs in RelOptInfo struct,  in thread [1].\n\nhttps://www.postgresql.org/message-id/flat/CAKU4AWpQjAqJwQ2X-aR9g3%2BZHRzU1k8hNP7A%2B_mLuOv-n5aVKA%40mail.gmail.com\n\n\n-- \nBest Regards\nAndy Fan (https://www.aliyun.com/)\n", "msg_date": "Thu, 11 Feb 2021 10:05:07 +0800", "msg_from": "Andy Fan <zhihui.fan1213@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Keeps tracking the uniqueness with UniqueKey" } ]
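The optimization discussed in the thread above — skipping DISTINCT work when the input relation is already provably unique — can be sketched concretely. This is an editorial illustration in Python, not the patch's actual C structures (the real UniqueKey in the patch is a List of Expr attached to RelOptInfo):

```python
def distinct_is_redundant(distinct_cols, unique_keys):
    """DISTINCT over distinct_cols is a no-op when the distinct'd
    columns cover some known unique key of the input relation."""
    dcols = set(distinct_cols)
    return any(key <= dcols for key in unique_keys)

# For "create table t (a int not null unique, b int)",
# {a} is a unique key of t.
t_unique_keys = [frozenset({"a"})]

print(distinct_is_redundant(["a", "b"], t_unique_keys))  # True
print(distinct_is_redundant(["b"], t_unique_keys))       # False
```

The second call returning False for David Rowley's example (`select distinct b from t where a = b`) is exactly his point: with keys stored as plain expressions, the fact that `a = b` makes `{b}` unique is lost, which is the argument for storing UniqueKeys as EquivalenceClasses instead.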
[ { "msg_contents": "When looking at something different, I happened to notice that pg_dump is a bit\ninconsistent in how it qualifies casts to pg_catalog entities like regclass and\noid. Most casts are qualified, but not all. Even though it functionally is\nthe same, being consistent is a good thing IMO and I can't see a reason not to,\nso the attached patch adds qualifications (the unqualified regclass cast in the\nTAP test left on purpose).\n\ncheers ./daniel", "msg_date": "Mon, 23 Mar 2020 11:23:08 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Unqualified pg_catalog casts in pg_dump" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> When looking at something different, I happened to notice that pg_dump is a bit\n> inconsistent in how it qualifies casts to pg_catalog entities like regclass and\n> oid. Most casts are qualified, but not all. Even though it functionally is\n> the same, being consistent is a good thing IMO and I can't see a reason not to,\n> so the attached patch adds qualifications (the unqualified regclass cast in the\n> TAP test left on purpose).\n\nWhile this used to be important before we made pg_dump force a minimal\nsearch_path, I'm not sure that there's any point in being picky about\nit anymore. (psql's describe.c is a different story though.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 23 Mar 2020 12:54:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Unqualified pg_catalog casts in pg_dump" }, { "msg_contents": "> On 23 Mar 2020, at 17:54, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Daniel Gustafsson <daniel@yesql.se> writes:\n>> When looking at something different, I happened to notice that pg_dump is a bit\n>> inconsistent in how it qualifies casts to pg_catalog entities like regclass and\n>> oid. Most casts are qualified, but not all. 
Even though it functionally is\n>> the same, being consistent is a good thing IMO and I can't see a reason not to,\n>> so the attached patch adds qualifications (the unqualified regclass cast in the\n>> TAP test left on purpose).\n> \n> While this used to be important before we made pg_dump force a minimal\n> search_path, I'm not sure that there's any point in being picky about\n> it anymore. (psql's describe.c is a different story though.)\n\nCorrect, there is no functional importance with this. IMO the value is in\nreadability and grep-ability.\n\ncheers ./daniel\n\n", "msg_date": "Mon, 23 Mar 2020 17:57:37 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Unqualified pg_catalog casts in pg_dump" }, { "msg_contents": "On Mon, Mar 23, 2020 at 05:57:37PM +0100, Daniel Gustafsson wrote:\n> Correct, there is no functional importance with this. IMO the value is in\n> readability and grep-ability.\n\nThis may cause extra conflicts when back-patching.\n--\nMichael", "msg_date": "Tue, 24 Mar 2020 14:30:54 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Unqualified pg_catalog casts in pg_dump" } ]
[ { "msg_contents": "Hi\n\nI try to search notice about it, to get info about release date of this\nfeature, but I cannot find it.\n\nRegards\n\nPavel\n", "msg_date": "Mon, 23 Mar 2020 13:43:28 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "is somewhere documented x LIKE ANY(ARRAY)?" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n\n> Hi\n>\n> I try to search notice about it, to get info about release date of this\n> feature, but I cannot find it.\n\nIt's documented in\nhttps://www.postgresql.org/docs/current/functions-comparisons.html, and\nhas been around since at least 7.4.\n\n> Regards\n>\n> Pavel\n\n- ilmari\n-- \n- Twitter seems more influential [than blogs] in the 'gets reported in\n  the mainstream press' sense at least. - Matt McLeod\n- That'd be because the content of a tweet is easier to condense down\n  to a mainstream media article. - Calle Dybedahl\n\n\n", "msg_date": "Mon, 23 Mar 2020 12:54:31 +0000", "msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)", "msg_from_op": false, "msg_subject": "Re: is somewhere documented x LIKE ANY(ARRAY)?" }, { "msg_contents": "po 23. 3. 2020 v 13:54 odesílatel Dagfinn Ilmari Mannsåker <\nilmari@ilmari.org> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n>\n> > Hi\n> >\n> > I try to search notice about it, to get info about release date of this\n> > feature, but I cannot find it.\n>\n> It's documented in\n> https://www.postgresql.org/docs/current/functions-comparisons.html, and\n> has been around since at least 7.4.\n>\n\nMy customer reports some issues on Postgres 9.3.\n\n\n\n> > Regards\n> >\n> > Pavel\n>\n> - ilmari\n> --\n> - Twitter seems more influential [than blogs] in the 'gets reported in\n> the mainstream press' sense at least. 
- Matt McLeod\n> - That'd be because the content of a tweet is easier to condense down\n> to a mainstream media article. - Calle Dybedahl\n>\n", "msg_date": "Mon, 23 Mar 2020 14:01:04 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: is somewhere documented x LIKE ANY(ARRAY)?" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> po 23. 3. 2020 v 13:54 odesílatel Dagfinn Ilmari Mannsåker <\n> ilmari@ilmari.org> napsal:\n>> It's documented in\n>> https://www.postgresql.org/docs/current/functions-comparisons.html, and\n>> has been around since at least 7.4.\n\nWell, to be fair, we don't really say anywhere that LIKE acts enough\nlike a plain operator to be used in this syntax. 
And the underlying\ncode is the subquery_Op production in gram.y, which is specific to\nthis syntax, so I'm not sure offhand to what extent LIKE acts like\nan operator for other corner cases.\n\n> My customer reports some issues on Postgres 9.3.\n\nDoesn't look to me like subquery_Op has changed much since 2004,\nso you'd really need to be more specific.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 23 Mar 2020 10:38:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: is somewhere documented x LIKE ANY(ARRAY)?" } ]
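For readers unfamiliar with the construct under discussion, `x LIKE ANY (ARRAY[...])` is true when x matches at least one pattern in the array. A rough Python sketch of those semantics (editorial illustration only; it ignores ESCAPE clauses, collations, and NULL handling):

```python
import re

def sql_like_to_regex(pattern):
    # SQL LIKE wildcards: % matches any string, _ matches any single
    # character; everything else is taken literally.
    out = ""
    for ch in pattern:
        if ch == "%":
            out += ".*"
        elif ch == "_":
            out += "."
        else:
            out += re.escape(ch)
    return "^" + out + "$"

def like_any(value, patterns):
    # Rough equivalent of:  value LIKE ANY (ARRAY[...])
    return any(re.match(sql_like_to_regex(p), value) for p in patterns)

print(like_any("postgres", ["post%", "my_ql"]))  # True
print(like_any("sqlite", ["post%", "my_ql"]))    # False
```

As the thread notes, in PostgreSQL itself this syntax is handled by the subquery_Op production in gram.y rather than by treating LIKE as an ordinary operator.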
[ { "msg_contents": "Hi all,\n\nWith the following statements on latest master (c81bd3b9), I find\nnegative cost for plan nodes.\n\ncreate table a (i int, j int);\ninsert into a select i%100000, i from generate_series(1,1000000)i;\nanalyze a;\n\n# explain select i from a group by i;\n                           QUERY PLAN\n-----------------------------------------------------------------\n HashAggregate  (cost=1300.00..-1585.82 rows=102043 width=4)\n   Group Key: i\n   Planned Partitions: 4\n   ->  Seq Scan on a  (cost=0.00..14425.00 rows=1000000 width=4)\n(4 rows)\n\nIn function cost_agg, when we add the disk costs of hash aggregation\nthat spills to disk, nbatches is calculated as 1.18 in this case. It is\ngreater than 1, so there will be spill. And the depth is calculated as\n-1 in this case, with num_partitions being 4. I think this is where\nthing goes wrong.\n\nThanks\nRichard\n", "msg_date": "Mon, 23 Mar 2020 21:13:48 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Negative cost is seen for plan node" }, { "msg_contents": "At Mon, 23 Mar 2020 21:13:48 +0800, Richard Guo <guofenglinux@gmail.com> wrote in \n> Hi all,\n> \n> With the following statements on latest master (c81bd3b9), I find\n> negative cost for plan nodes.\n> \n> create table a (i int, j int);\n> insert into a select i%100000, i from generate_series(1,1000000)i;\n> analyze a;\n> \n> # explain select i from a group by i;\n>                            QUERY PLAN\n> -----------------------------------------------------------------\n>  HashAggregate  (cost=1300.00..-1585.82 rows=102043 width=4)\n\nGood catch!\n\n>    Group Key: i\n>    Planned Partitions: 4\n>    ->  Seq Scan on a  (cost=0.00..14425.00 rows=1000000 width=4)\n> (4 rows)\n> \n> In function cost_agg, when we add the disk costs of hash aggregation\n> that spills to disk, nbatches is calculated as 1.18 in this case. It is\n> greater than 1, so there will be spill. And the depth is calculated as\n> -1 in this case, with num_partitions being 4. I think this is where\n> thing goes wrong.\n\nThe depth is the expected number of iterations of reading the relation.\n\n>\tdepth = ceil( log(nbatches - 1) / log(num_partitions) );\n\nI'm not sure what the expression based on, but apparently it is wrong\nfor nbatches <= 2.0. 
It looks like a thinko of something like this.\n\n\tdepth = ceil( log(nbatches) / log(num_partitions + 1) );\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n\n", "msg_date": "Tue, 24 Mar 2020 13:00:59 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Negative cost is seen for plan node" }, { "msg_contents": "On Tue, Mar 24, 2020 at 12:01 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> At Mon, 23 Mar 2020 21:13:48 +0800, Richard Guo <guofenglinux@gmail.com>\n> wrote in\n> > Hi all,\n> >\n> > With the following statements on latest master (c81bd3b9), I find\n> > negative cost for plan nodes.\n> >\n> > create table a (i int, j int);\n> > insert into a select i%100000, i from generate_series(1,1000000)i;\n> > analyze a;\n> >\n> > # explain select i from a group by i;\n> > QUERY PLAN\n> > -----------------------------------------------------------------\n> > HashAggregate (cost=1300.00..-1585.82 rows=102043 width=4)\n>\n> Good catch!\n>\n> > Group Key: i\n> > Planned Partitions: 4\n> > -> Seq Scan on a (cost=0.00..14425.00 rows=1000000 width=4)\n> > (4 rows)\n> >\n> > In function cost_agg, when we add the disk costs of hash aggregation\n> > that spills to disk, nbatches is calculated as 1.18 in this case. It is\n> > greater than 1, so there will be spill. And the depth is calculated as\n> > -1 in this case, with num_partitions being 4. I think this is where\n> > thing goes wrong.\n>\n> The depth is the expected number of iterations of reading the relation.\n>\n> > depth = ceil( log(nbatches - 1) / log(num_partitions) );\n>\n\nYes correct.\n\n\n>\n> I'm not sure what the expression based on, but apparently it is wrong\n> for nbatches <= 2.0. 
It looks like a thinko of something like this.\n>\n>         depth = ceil( log(nbatches) / log(num_partitions + 1) );\n>\n\nIt seems to me we should use '(nbatches - 1)', without the log function.\nMaybe I'm wrong.\n\nI have sent this issue to the 'Memory-Bounded Hash Aggregation' thread.\n\nThanks\nRichard\n", "msg_date": "Thu, 26 Mar 2020 18:06:45 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Negative cost is seen for plan node" } ]
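The arithmetic in this thread is easy to reproduce. With nbatches = 1.18 and num_partitions = 4 (the reported case), the depth expression quoted from cost_agg() goes negative, while Kyotaro's suggested variant stays positive. A quick Python check (editorial illustration of the two formulas quoted above; the real code is C in the costing logic):

```python
import math

def depth_current(nbatches, num_partitions):
    # Expression quoted from cost_agg() in the thread:
    #   depth = ceil( log(nbatches - 1) / log(num_partitions) );
    return math.ceil(math.log(nbatches - 1) / math.log(num_partitions))

def depth_suggested(nbatches, num_partitions):
    # Kyotaro's suggested replacement:
    #   depth = ceil( log(nbatches) / log(num_partitions + 1) );
    return math.ceil(math.log(nbatches) / math.log(num_partitions + 1))

# The case from the report: nbatches = 1.18, num_partitions = 4.
print(depth_current(1.18, 4))    # -1, which drives the plan cost negative
print(depth_suggested(1.18, 4))  # 1
```

For nbatches = 2.0 the current expression yields ceil(log(1)/log(4)) = 0 passes, illustrating Kyotaro's remark that it is wrong for nbatches <= 2.0.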
[ { "msg_contents": "Many people enjoy the Windows testing the cfbot runs on the AppVeyor \nservice.\n\nYou can also run this yourself without the detour through the commit \nfest app. Attached are three patches that add .appveyor.yml files, for \nMSVC, MinGW, and Cygwin respectively. (An open problem is to combine \nthem all into one.) I have been using these regularly over the last few \nmonths to test code on these Windows variants.\n\nTo use them, first you need to set up your AppVeyor account and link it \nto a github (or gitlab or ...) repository. Then git am the patch on top \nof some branch, push to github (or ...) and watch it build.\n\nThis is just for individual enjoyment; I don't mean to commit them at \nthis time.\n\n(Some of this has been cribbed from the cfbot work and many other places \nall over the web.)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 23 Mar 2020 17:05:33 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "some AppVeyor files" }, { "msg_contents": "On Tue, Mar 24, 2020 at 5:05 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> You can also run this yourself without the detour through the commit\n> fest app. Attached are three patches that add .appveyor.yml files, for\n> MSVC, MinGW, and Cygwin respectively. (An open problem is to combine\n> them all into one.) I have been using these regularly over the last few\n> months to test code on these Windows variants.\n\nThanks! I added a link to this thread to a Wiki page that tries to\ncollect information on this topic[1]. Another thing you could be\ninterested in is the ability to test on several different MSVC\nversions (I tried to find some appveyor.yml files I had around here\nsomewhere to do that, but no cigar... 
it's just different paths for\nthose .bat files that set up the environment).\n\nHere is my current wish list for Windows CI:\n\n1. Run check-world with tap tests.\n2. Turn on the equivalent of -Werror (maybe).\n3. Turn on asserts.\n4. Print backtraces on crash.\n5. Dump all potentially relevant logs on failure (initdb.log,\nregression.diff etc).\n6. Find a Windows thing that is like ccache and preserve its cache\nacross builds (like Travis, which saves some build time).\n\n[1] https://wiki.postgresql.org/wiki/Continuous_Integration\n\n\n", "msg_date": "Wed, 25 Mar 2020 15:27:06 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: some AppVeyor files" }, { "msg_contents": "On Tue, Mar 24, 2020 at 10:27 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Thanks! I added a link to this thread to a Wiki page that tries to\n> collect information on this topic[1]. Another thing you could be\n> interested in is the ability to test on several different MSVC\n> versions (I tried to find some appveyor.yml files I had around here\n> somewhere to do that, but no cigar... it's just different paths for\n> those .bat files that set up the environment).\n>\n> Here is my current wish list for Windows CI:\n>\n> 1. Run check-world with tap tests.\n> 2. Turn on the equivalent of -Werror (maybe).\n> 3. Turn on asserts.\n> 4. Print backtraces on crash.\n> 5. Dump all potentially relevant logs on failure (initdb.log,\n> regression.diff etc).\n\nFWIW, you can RDP to the AppVeyor instance by setting an appveyor\npassword and pausing the machine on failure[1]. I played around with\nsetting the registry key to add localdumps for postgres with the\nintent of feeding those to procdump. 
I didn't have any success with\ngenerating dump files, but that also could have been because I wasn't\ngetting actual crashes.\n\nAt any rate, getting into the machine is useful in general for getting\nmore post-build/post-failure information, in case you weren't\nconfigured for it already.\n\n[1] https://www.appveyor.com/docs/how-to/rdp-to-build-worker/\n\nThanks,\n-- \nMike Palmiotto\nhttps://crunchydata.com\n\n\n", "msg_date": "Fri, 27 Mar 2020 16:30:47 -0400", "msg_from": "Mike Palmiotto <mike.palmiotto@crunchydata.com>", "msg_from_op": false, "msg_subject": "Re: some AppVeyor files" } ]
[ { "msg_contents": "Hello\n\nWhile messing with EXPLAIN on a query emitted by pg_dump, I noticed that\ncurrent Postgres 10 emits weird bucket/batch/memory values for certain\nhash nodes:\n\n -> Hash (cost=0.11..0.11 rows=10 width=12) (actual time=0.002..0.002 rows=1 loops=8)\n Buckets: 2139062143 Batches: 2139062143 Memory Usage: 8971876904722400kB\n -> Function Scan on unnest init_1 (cost=0.01..0.11 rows=10 width=12) (actual time=0.001..0.001 rows=1 loops=8)\n\nIt shows normal values in 9.6.\n\nThe complete query is:\n\nSELECT c.tableoid, c.oid, c.relname, (SELECT pg_catalog.array_agg(acl ORDER BY row_n) FROM (SELECT acl, row_n FROM pg_catalog.unnest(coalesce(c.relacl,pg_catalog.acldefault(CASE WHEN c.relkind = 'S' THEN 's' ELSE 'r' END::\"char\",c.relowner))) WITH ORDINALITY AS perm(acl,row_n) WHERE NOT EXISTS ( SELECT 1 FROM pg_catalog.unnest(coalesce(pip.initprivs,pg_catalog.acldefault(CASE WHEN c.relkind = 'S' THEN 's' ELSE 'r' END::\"char\",c.relowner))) AS init(init_acl) WHERE acl = init_acl)) as foo) AS relacl, (SELECT pg_catalog.array_agg(acl ORDER BY row_n) FROM (SELECT acl, row_n FROM pg_catalog.unnest(coalesce(pip.initprivs,pg_catalog.acldefault(CASE WHEN c.relkind = 'S' THEN 's' ELSE 'r' END::\"char\",c.relowner))) WITH ORDINALITY AS initp(acl,row_n) WHERE NOT EXISTS ( SELECT 1 FROM pg_catalog.unnest(coalesce(c.relacl,pg_catalog.acldefault(CASE WHEN c.relkind = 'S' THEN 's' ELSE 'r' END::\"char\",c.relowner))) AS permp(orig_acl) WHERE acl = orig_acl)) as foo) as rrelacl, NULL AS initrelacl, NULL as initrrelacl, c.relkind, c.relnamespace, (SELECT rolname FROM pg_catalog.pg_roles WHERE oid = c.relowner) AS rolname, c.relchecks, c.relhastriggers, c.relhasindex, c.relhasrules, 'f'::bool AS relhasoids, c.relrowsecurity, c.relforcerowsecurity, c.relfrozenxid, c.relminmxid, tc.oid AS toid, tc.relfrozenxid AS tfrozenxid, tc.relminmxid AS tminmxid, c.relpersistence, c.relispopulated, c.relreplident, c.relpages, am.amname, CASE WHEN c.reloftype <> 0 THEN 
c.reloftype::pg_catalog.regtype ELSE NULL END AS reloftype, d.refobjid AS owning_tab, d.refobjsubid AS owning_col, (SELECT spcname FROM pg_tablespace t WHERE t.oid = c.reltablespace) AS reltablespace, array_remove(array_remove(c.reloptions,'check_option=local'),'check_option=cascaded') AS reloptions, CASE WHEN 'check_option=local' = ANY (c.reloptions) THEN 'LOCAL'::text WHEN 'check_option=cascaded' = ANY (c.reloptions) THEN 'CASCADED'::text ELSE NULL END AS checkoption, tc.reloptions AS toast_reloptions, c.relkind = 'S' AND EXISTS (SELECT 1 FROM pg_depend WHERE classid = 'pg_class'::regclass AND objid = c.oid AND objsubid = 0 AND refclassid = 'pg_class'::regclass AND deptype = 'i') AS is_identity_sequence, EXISTS (SELECT 1 FROM pg_attribute at LEFT JOIN pg_init_privs pip ON (c.oid = pip.objoid AND pip.classoid = 'pg_class'::regclass AND pip.objsubid = at.attnum)WHERE at.attrelid = c.oid AND ((SELECT pg_catalog.array_agg(acl ORDER BY row_n) FROM (SELECT acl, row_n FROM pg_catalog.unnest(coalesce(at.attacl,pg_catalog.acldefault('c',c.relowner))) WITH ORDINALITY AS perm(acl,row_n) WHERE NOT EXISTS ( SELECT 1 FROM pg_catalog.unnest(coalesce(pip.initprivs,pg_catalog.acldefault('c',c.relowner))) AS init(init_acl) WHERE acl = init_acl)) as foo) IS NOT NULL OR (SELECT pg_catalog.array_agg(acl ORDER BY row_n) FROM (SELECT acl, row_n FROM pg_catalog.unnest(coalesce(pip.initprivs,pg_catalog.acldefault('c',c.relowner))) WITH ORDINALITY AS initp(acl,row_n) WHERE NOT EXISTS ( SELECT 1 FROM pg_catalog.unnest(coalesce(at.attacl,pg_catalog.acldefault('c',c.relowner))) AS permp(orig_acl) WHERE acl = orig_acl)) as foo) IS NOT NULL OR NULL IS NOT NULL OR NULL IS NOT NULL))AS changed_acl, pg_get_partkeydef(c.oid) AS partkeydef, c.relispartition AS ispartition, pg_get_expr(c.relpartbound, c.oid) AS partbound FROM pg_class c LEFT JOIN pg_depend d ON (c.relkind = 'S' AND d.classid = c.tableoid AND d.objid = c.oid AND d.objsubid = 0 AND d.refclassid = c.tableoid AND d.deptype IN ('a', 
'i')) LEFT JOIN pg_class tc ON (c.reltoastrelid = tc.oid AND c.relkind <> 'p') LEFT JOIN pg_am am ON (c.relam = am.oid) LEFT JOIN pg_init_privs pip ON (c.oid = pip.objoid AND pip.classoid = 'pg_class'::regclass AND pip.objsubid = 0) WHERE c.relkind in ('r', 'S', 'v', 'c', 'm', 'f', 'p') ORDER BY c.oid\n\nI'm not looking into this right now. If somebody is bored in\nquarantine, they might have a good time bisecting this.\n\n-- \nÁlvaro Herrera\n\n\n", "msg_date": "Mon, 23 Mar 2020 13:50:59 -0300", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "weird hash plan cost, starting with pg10" }, { "msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> While messing with EXPLAIN on a query emitted by pg_dump, I noticed that\n> current Postgres 10 emits weird bucket/batch/memory values for certain\n> hash nodes:\n\n> -> Hash (cost=0.11..0.11 rows=10 width=12) (actual time=0.002..0.002 rows=1 loops=8)\n> Buckets: 2139062143 Batches: 2139062143 Memory Usage: 8971876904722400kB\n> -> Function Scan on unnest init_1 (cost=0.01..0.11 rows=10 width=12) (actual time=0.001..0.001 rows=1 loops=8)\n\nLooks suspiciously like uninitialized memory ...\n\n> The complete query is:\n\nReproduces here, though oddly only a couple of the several hash subplans\nare doing that.\n\nI'm not planning to dig into it right this second either.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 23 Mar 2020 13:00:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: weird hash plan cost, starting with pg10" }, { "msg_contents": "On Tue, Mar 24, 2020 at 6:01 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > While messing with EXPLAIN on a query emitted by pg_dump, I noticed that\n> > current Postgres 10 emits weird bucket/batch/memory values for certain\n> > hash nodes:\n>\n> > -> Hash (cost=0.11..0.11 rows=10 width=12) (actual time=0.002..0.002 rows=1 
loops=8)\n> > Buckets: 2139062143 Batches: 2139062143 Memory Usage: 8971876904722400kB\n> > -> Function Scan on unnest init_1 (cost=0.01..0.11 rows=10 width=12) (actual time=0.001..0.001 rows=1 loops=8)\n>\n> Looks suspiciously like uninitialized memory ...\n\nI think \"hashtable\" might have been pfree'd before\nExecHashGetInstrumentation() ran, because those numbers look like\nCLOBBER_FREED_MEMORY's pattern:\n\n>>> hex(2139062143)\n'0x7f7f7f7f'\n>>> hex(8971876904722400 / 1024)\n'0x7f7f7f7f7f7'\n\nMaybe there is something wrong with the shutdown order of nested subplans.\n\n\n", "msg_date": "Tue, 24 Mar 2020 09:55:11 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: weird hash plan cost, starting with pg10" }, { "msg_contents": "On Tue, Mar 24, 2020 at 9:55 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Tue, Mar 24, 2020 at 6:01 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > > While messing with EXPLAIN on a query emitted by pg_dump, I noticed that\n> > > current Postgres 10 emits weird bucket/batch/memory values for certain\n> > > hash nodes:\n> >\n> > > -> Hash (cost=0.11..0.11 rows=10 width=12) (actual time=0.002..0.002 rows=1 loops=8)\n> > > Buckets: 2139062143 Batches: 2139062143 Memory Usage: 8971876904722400kB\n> > > -> Function Scan on unnest init_1 (cost=0.01..0.11 rows=10 width=12) (actual time=0.001..0.001 rows=1 loops=8)\n> >\n> > Looks suspiciously like uninitialized memory ...\n>\n> I think \"hashtable\" might have been pfree'd before\n> ExecHashGetInstrumentation() ran, because those numbers look like\n> CLOBBER_FREED_MEMORY's pattern:\n>\n> >>> hex(2139062143)\n> '0x7f7f7f7f'\n> >>> hex(8971876904722400 / 1024)\n> '0x7f7f7f7f7f7'\n>\n> Maybe there is something wrong with the shutdown order of nested subplans.\n\nI think there might be a case like this:\n\n* ExecRescanHashJoin() decides it can't reuse the hash table for a\nrescan, so it 
calls ExecHashTableDestroy(), clears HashJoinState's\nhj_HashTable and sets hj_JoinState to HJ_BUILD_HASHTABLE\n* the HashState node still has a reference to the pfree'd HashJoinTable!\n* HJ_BUILD_HASHTABLE case reaches the empty-outer optimisation case so\nit doesn't bother to build a new hash table\n* EXPLAIN examines the HashState's pointer to a freed HashJoinTable struct\n\nYou could fix the dangling pointer problem by clearing it, but then\nyou'd have no data for EXPLAIN to show in this case. Some other\nsolution is probably needed, but I didn't have time to dig further\ntoday.\n\n\n", "msg_date": "Tue, 24 Mar 2020 16:04:56 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: weird hash plan cost, starting with pg10" }, { "msg_contents": "On Mon, Mar 23, 2020 at 01:50:59PM -0300, Alvaro Herrera wrote:\n> While messing with EXPLAIN on a query emitted by pg_dump, I noticed that\n> current Postgres 10 emits weird bucket/batch/memory values for certain\n> hash nodes:\n> \n> -> Hash (cost=0.11..0.11 rows=10 width=12) (actual time=0.002..0.002 rows=1 loops=8)\n> Buckets: 2139062143 Batches: 2139062143 Memory Usage: 8971876904722400kB\n> -> Function Scan on unnest init_1 (cost=0.01..0.11 rows=10 width=12) (actual time=0.001..0.001 rows=1 loops=8)\n> \n> It shows normal values in 9.6.\n\nYour message wasn't totally clear, but this is a live bug on 13dev.\n\nIt's actually broken on 9.6, but the issue isn't exposed until commit\n6f236e1eb: \"psql: Add tab completion for logical replication\",\n..which adds a nondefault ACL.\n\nI reproduced the problem with this recipe, which doesn't depend on\nc.relispartion or pg_get_partkeydef, and everything else shifting underfoot..\n\n|CREATE TABLE t (i int); REVOKE ALL ON t FROM pryzbyj; explain analyze SELECT (SELECT 1 FROM (SELECT * FROM unnest(c.relacl)AS acl WHERE NOT EXISTS ( SELECT 1 FROM unnest(c.relacl) AS init(init_acl) WHERE acl=init_acl)) as foo) AS relacl , EXISTS 
(SELECT 1 FROM pg_depend WHERE objid=c.oid) FROM pg_class c ORDER BY c.oid;\n| Index Scan using pg_class_oid_index on pg_class c (cost=0.27..4704.25 rows=333 width=9) (actual time=16.257..28.054 rows=334 loops=1)\n| SubPlan 1\n| -> Hash Anti Join (cost=2.25..3.63 rows=1 width=4) (actual time=0.024..0.024 rows=0 loops=334)\n| Hash Cond: (acl.acl = init.init_acl)\n| -> Function Scan on unnest acl (cost=0.00..1.00 rows=100 width=12) (actual time=0.007..0.007 rows=1 loops=334)\n| -> Hash (cost=1.00..1.00 rows=100 width=12) (actual time=0.015..0.015 rows=2 loops=179)\n| Buckets: 2139062143 Batches: 2139062143 Memory Usage: 8971876904722400kB\n| -> Function Scan on unnest init (cost=0.00..1.00 rows=100 width=12) (actual time=0.009..0.010 rows=2 loops=179)\n| SubPlan 2\n| -> Seq Scan on pg_depend (cost=0.00..144.21 rows=14 width=0) (never executed)\n| Filter: (objid = c.oid)\n| SubPlan 3\n| -> Seq Scan on pg_depend pg_depend_1 (cost=0.00..126.17 rows=7217 width=4) (actual time=0.035..6.270 rows=7220 loops=1)\n\nWhen I finally gave up on thinking I knew what branch was broken, I got:\n\n|3fc6e2d7f5b652b417fa6937c34de2438d60fa9f is the first bad commit\n|commit 3fc6e2d7f5b652b417fa6937c34de2438d60fa9f\n|Author: Tom Lane <tgl@sss.pgh.pa.us>\n|Date: Mon Mar 7 15:58:22 2016 -0500\n|\n| Make the upper part of the planner work by generating and comparing Paths.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 24 Mar 2020 01:23:26 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: weird hash plan cost, starting with pg10" }, { "msg_contents": "On Tue, Mar 24, 2020 at 11:05 AM Thomas Munro <thomas.munro@gmail.com>\nwrote:\n\n> On Tue, Mar 24, 2020 at 9:55 AM Thomas Munro <thomas.munro@gmail.com>\n> wrote:\n> > On Tue, Mar 24, 2020 at 6:01 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > > > While messing with EXPLAIN on a query emitted by pg_dump, I noticed\n> that\n> > > > current 
Postgres 10 emits weird bucket/batch/memory values for\n> certain\n> > > > hash nodes:\n> > >\n> > > > -> Hash (cost=0.11..0.11 rows=10\n> width=12) (actual time=0.002..0.002 rows=1 loops=8)\n> > > > Buckets: 2139062143 Batches:\n> 2139062143 Memory Usage: 8971876904722400kB\n> > > > -> Function Scan on unnest init_1\n> (cost=0.01..0.11 rows=10 width=12) (actual time=0.001..0.001 rows=1 loops=8)\n> > >\n> > > Looks suspiciously like uninitialized memory ...\n> >\n> > I think \"hashtable\" might have been pfree'd before\n> > ExecHashGetInstrumentation() ran, because those numbers look like\n> > CLOBBER_FREED_MEMORY's pattern:\n> >\n> > >>> hex(2139062143)\n> > '0x7f7f7f7f'\n> > >>> hex(8971876904722400 / 1024)\n> > '0x7f7f7f7f7f7'\n> >\n> > Maybe there is something wrong with the shutdown order of nested\n> subplans.\n>\n> I think there might be a case like this:\n>\n> * ExecRescanHashJoin() decides it can't reuse the hash table for a\n> rescan, so it calls ExecHashTableDestroy(), clears HashJoinState's\n> hj_HashTable and sets hj_JoinState to HJ_BUILD_HASHTABLE\n> * the HashState node still has a reference to the pfree'd HashJoinTable!\n> * HJ_BUILD_HASHTABLE case reaches the empty-outer optimisation case so\n> it doesn't bother to build a new hash table\n> * EXPLAIN examines the HashState's pointer to a freed HashJoinTable struct\n>\n\nYes, debugging with gdb shows this is exactly what happens.\n\nThanks\nRichard", "msg_date": "Tue, 24 Mar 2020 15:36:34 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: weird hash plan cost, starting with pg10" }, { "msg_contents": "On Tue, Mar 24, 2020 at 3:36 PM Richard Guo <guofenglinux@gmail.com> wrote:\n\n> On Tue, Mar 24, 2020 at 11:05 AM Thomas Munro <thomas.munro@gmail.com>\n> wrote:\n>\n>>\n>> I think there might be a case like this:\n>>\n>> * ExecRescanHashJoin() decides it can't reuse the hash table for a\n>> rescan, so it calls ExecHashTableDestroy(), clears HashJoinState's\n>> hj_HashTable and sets hj_JoinState to HJ_BUILD_HASHTABLE\n>> * the HashState node still has a reference to the pfree'd HashJoinTable!\n>> * 
HJ_BUILD_HASHTABLE case reaches the empty-outer optimisation case so\n>> it doesn't bother to build a new hash table\n>> * EXPLAIN examines the HashState's pointer to a freed HashJoinTable struct\n>>\n>\n> Yes, debugging with gdb shows this is exactly what happens.\n>\n\nAccording to the scenario above, here is a recipe that reproduces this\nissue.\n\n-- recipe start\ncreate table a(i int, j int);\ncreate table b(i int, j int);\ncreate table c(i int, j int);\n\ninsert into a select 3,3;\ninsert into a select 2,2;\ninsert into a select 1,1;\n\ninsert into b select 3,3;\n\ninsert into c select 0,0;\n\nanalyze a;\nanalyze b;\nanalyze c;\n\nset enable_nestloop to off;\nset enable_mergejoin to off;\n\nexplain analyze\nselect exists(select * from b join c on a.i > c.i and a.i = b.i and b.j =\nc.j) from a;\n-- recipe end\n\nI tried this recipe on different PostgreSQL versions, starting from\ncurrent master and going backwards. I was able to reproduce this issue\non all versions above 8.4. In 8.4 version, we do not output information\non hash buckets/batches. But manual inspection with gdb shows in 8.4 we\nalso have the dangling pointer for HashState->hashtable. 
I didn't check\nversions below 8.4 though.\n\nThanks\nRichard", "msg_date": "Wed, 25 Mar 2020 18:36:17 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: weird hash plan cost, starting with pg10" }, { "msg_contents": "On 25.03.2020 13:36, Richard Guo wrote:\n>\n> On Tue, Mar 24, 2020 at 3:36 PM Richard Guo <guofenglinux@gmail.com \n> <mailto:guofenglinux@gmail.com>> wrote:\n>\n> On Tue, Mar 24, 2020 at 11:05 AM Thomas Munro\n> <thomas.munro@gmail.com <mailto:thomas.munro@gmail.com>> wrote:\n>\n>\n> I think there might be a case like this:\n>\n> * ExecRescanHashJoin() decides it can't reuse the hash table for a\n> rescan, so it calls ExecHashTableDestroy(), clears HashJoinState's\n> hj_HashTable and sets hj_JoinState to HJ_BUILD_HASHTABLE\n> * the HashState node still has a reference to the pfree'd\n> HashJoinTable!\n> * HJ_BUILD_HASHTABLE case reaches the empty-outer optimisation\n> case so\n> it doesn't bother to build a new hash table\n> * EXPLAIN examines the HashState's pointer to a freed\n> HashJoinTable struct\n>\n>\n> Yes, debugging with gdb shows this is exactly what happens.\n>\n>\n> According to the scenario above, here is a recipe that reproduces this\n> issue.\n>\n> -- recipe start\n> create table a(i int, j int);\n> create table b(i int, j int);\n> create table c(i int, j int);\n>\n> insert into a select 3,3;\n> insert into a select 2,2;\n> insert into a select 1,1;\n>\n> insert into b select 3,3;\n>\n> insert into c select 0,0;\n>\n> analyze a;\n> analyze b;\n> analyze c;\n>\n> set enable_nestloop to off;\n> set enable_mergejoin to off;\n>\n> explain analyze\n> select exists(select * from b join c on a.i > c.i and a.i = b.i and \n> b.j = c.j) from a;\n> -- recipe end\n>\n> I tried this recipe on different PostgreSQL versions, starting from\n> current master and going backwards. I was able to reproduce this issue\n> on all versions above 8.4. 
In 8.4 version, we do not output information\n> on hash buckets/batches. But manual inspection with gdb shows in 8.4 we\n> also have the dangling pointer for HashState->hashtable. I didn't check\n> versions below 8.4 though.\n>\n> Thanks\n> Richard\n\nI can propose the following patch for the problem.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Wed, 25 Mar 2020 17:29:59 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: weird hash plan cost, starting with pg10" }, { "msg_contents": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru> writes:\n> On 25.03.2020 13:36, Richard Guo wrote:\n>> I tried this recipe on different PostgreSQL versions, starting from\n>> current master and going backwards. I was able to reproduce this issue\n>> on all versions above 8.4. In 8.4 version, we do not output information\n>> on hash buckets/batches. But manual inspection with gdb shows in 8.4 we\n>> also have the dangling pointer for HashState->hashtable. I didn't check\n>> versions below 8.4 though.\n\n> I can propose the following patch for the problem.\n\nI looked at this patch a bit, and I don't think it goes far enough.\nWhat this issue is really pointing out is that EXPLAIN is not considering\nthe possibility of a Hash node having had several hashtable instantiations\nover its lifespan. I propose what we do about that is generalize the\npolicy that show_hash_info() is already implementing (in a rather half\nbaked way) for multiple workers, and report the maximum field values\nacross all instantiations. We can combine the code needed to do so\nwith the code for the parallelism case, as shown in the 0001 patch\nbelow.\n\nIn principle we could probably get away with back-patching 0001,\nat least into branches that already have the HashState.hinstrument\npointer. I'm not sure it's worth any risk though. 
A much simpler\nfix is to make sure we clear the dangling hashtable pointer, as in\n0002 below (a simplified form of Konstantin's patch). The net\neffect of that is that in the case where a hash table is destroyed\nand never rebuilt, EXPLAIN ANALYZE would report no hash stats,\nrather than possibly-garbage stats like it does today. That's\nprobably good enough, because it should be an uncommon corner case.\n\nThoughts?\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 10 Apr 2020 16:11:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: weird hash plan cost, starting with pg10" }, { "msg_contents": "On Sat, Apr 11, 2020 at 4:11 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Konstantin Knizhnik <k.knizhnik@postgrespro.ru> writes:\n> > On 25.03.2020 13:36, Richard Guo wrote:\n> >> I tried this recipe on different PostgreSQL versions, starting from\n> >> current master and going backwards. I was able to reproduce this issue\n> >> on all versions above 8.4. In 8.4 version, we do not output information\n> >> on hash buckets/batches. But manual inspection with gdb shows in 8.4 we\n> >> also have the dangling pointer for HashState->hashtable. I didn't check\n> >> versions below 8.4 though.\n>\n> > I can propose the following patch for the problem.\n>\n> I looked at this patch a bit, and I don't think it goes far enough.\n> What this issue is really pointing out is that EXPLAIN is not considering\n> the possibility of a Hash node having had several hashtable instantiations\n> over its lifespan. I propose what we do about that is generalize the\n> policy that show_hash_info() is already implementing (in a rather half\n> baked way) for multiple workers, and report the maximum field values\n> across all instantiations. 
We can combine the code needed to do so\n> with the code for the parallelism case, as shown in the 0001 patch\n> below.\n>\n\nI looked through 0001 patch and it looks good to me.\n\nAt first I was wondering if we need to check whether HashState.hashtable\nis not NULL in ExecShutdownHash() before we decide to allocate save\nspace for HashState.hinstrument. And then I convinced myself that that's\nnot necessary since HashState.hinstrument and HashState.hashtable cannot\nbe both NULL there.\n\n\n>\n> In principle we could probably get away with back-patching 0001,\n> at least into branches that already have the HashState.hinstrument\n> pointer. I'm not sure it's worth any risk though. A much simpler\n> fix is to make sure we clear the dangling hashtable pointer, as in\n> 0002 below (a simplified form of Konstantin's patch). The net\n> effect of that is that in the case where a hash table is destroyed\n> and never rebuilt, EXPLAIN ANALYZE would report no hash stats,\n> rather than possibly-garbage stats like it does today. That's\n> probably good enough, because it should be an uncommon corner case.\n>\n\nYes it's an uncommon corner case. But I think it may still surprise\npeople that most of the time the hash stat shows well but sometimes it\ndoes not.\n\nThanks\nRichard", "msg_date": "Mon, 13 Apr 2020 17:07:41 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: weird hash plan cost, starting with pg10" }, { "msg_contents": "Richard Guo <guofenglinux@gmail.com> writes:\n> At first I was wondering if we need to check whether HashState.hashtable\n> is not NULL in ExecShutdownHash() before we decide to allocate save\n> space for HashState.hinstrument. And then I convinced myself that that's\n> not necessary since HashState.hinstrument and HashState.hashtable cannot\n> be both NULL there.\n\nEven if the hashtable is null at that point, creating an all-zeroes\nhinstrument struct is harmless.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 13 Apr 2020 09:53:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: weird hash plan cost, starting with pg10" }, { "msg_contents": "On Mon, Apr 13, 2020 at 9:53 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Richard Guo <guofenglinux@gmail.com> writes:\n> > At first I was wondering if we need to check whether HashState.hashtable\n> > is not NULL in ExecShutdownHash() before we decide to allocate save\n> > space for HashState.hinstrument. And then I convinced myself that that's\n> > not necessary since HashState.hinstrument and HashState.hashtable cannot\n> > be both NULL there.\n>\n> Even if the hashtable is null at that point, creating an all-zeroes\n> hinstrument struct is harmless.\n>\n\nCorrect. The only benefit we may get from checking if the hashtable is\nnull is to avoid an unnecessary palloc0 for hinstrument. 
But that case\ncannot happen though.\n\nThanks\nRichard", "msg_date": "Tue, 14 Apr 2020 09:50:39 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: weird hash plan cost, starting with pg10" }, { "msg_contents": "On Fri, Apr 10, 2020 at 04:11:27PM -0400, Tom Lane wrote:\n> I'm not sure it's worth any risk though. 
That's\n> probably good enough, because it should be an uncommon corner case.\n> \n> Thoughts?\n\nChecking if you're planning to backpatch this ?\n\n> diff --git a/src/backend/executor/nodeHashjoin.c b/src/backend/executor/nodeHashjoin.c\n> index c901a80..9e28ddd 100644\n> --- a/src/backend/executor/nodeHashjoin.c\n> +++ b/src/backend/executor/nodeHashjoin.c\n> @@ -1336,6 +1336,12 @@ ExecReScanHashJoin(HashJoinState *node)\n> \t\telse\n> \t\t{\n> \t\t\t/* must destroy and rebuild hash table */\n> +\t\t\tHashState *hashNode = castNode(HashState, innerPlanState(node));\n> +\n> +\t\t\t/* for safety, be sure to clear child plan node's pointer too */\n> +\t\t\tAssert(hashNode->hashtable == node->hj_HashTable);\n> +\t\t\thashNode->hashtable = NULL;\n> +\n> \t\t\tExecHashTableDestroy(node->hj_HashTable);\n> \t\t\tnode->hj_HashTable = NULL;\n> \t\t\tnode->hj_JoinState = HJ_BUILD_HASHTABLE;\n\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 27 Apr 2020 11:18:23 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: weird hash plan cost, starting with pg10" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> Checking if you're planning to backpatch this ?\n\nAre you speaking of 5c27bce7f et al?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 27 Apr 2020 12:26:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: weird hash plan cost, starting with pg10" }, { "msg_contents": "On Mon, Apr 27, 2020 at 12:26:03PM -0400, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > Checking if you're planning to backpatch this ?\n> \n> Are you speaking of 5c27bce7f et al?\n\nOops, yes, thanks.\n\nI updated wiki/PostgreSQL_13_Open_Items just now.\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 27 Apr 2020 11:29:58 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: weird hash plan cost, starting with pg10" } ]
[ { "msg_contents": "This is my *first* attempt to submit a Postgres patch, please let me know if I missed any process or format of the patch (I used this link https://wiki.postgresql.org/wiki/Working_with_Git As reference)\n\n\n\nThe original bug reporting-email and the relevant discussion is here\n\n\n\nhttps://www.postgresql.org/message-id/20191207001232.klidxnm756wqxvwx%40alap3.anarazel.de\n\nhttps://www.postgresql.org/message-id/822113470.250068.1573246011818%40connect.xfinity.com\n\nhttps://www.postgresql.org/message-id/20191206230640.2dvdjpcgn46q3ks2%40alap3.anarazel.de\n\nhttps://www.postgresql.org/message-id/1880.1281020817@sss.pgh.pa.us\n\n\n\nThe crux of the fix is, in the current code, engine drops the buffer and then truncates the file, but a crash before the truncate and after the buffer-drop is causing the corruption. Patch reverses the order i.e. truncate the file and drop the buffer later.\n\n\n\nWarm regards,\n\nTeja", "msg_date": "Mon, 23 Mar 2020 20:56:59 +0000", "msg_from": "Teja Mupparti <tejeswarm@hotmail.com>", "msg_from_op": true, "msg_subject": "Corruption during WAL replay" }, { "msg_contents": "Thanks for working on this.\n\nAt Mon, 23 Mar 2020 20:56:59 +0000, Teja Mupparti <tejeswarm@hotmail.com> wrote in \n> This is my *first* attempt to submit a Postgres patch, please let me know if I missed any process or format of the patch \n\nWelcome! The format looks fine to me. It would be better if it had a\ncommit message that explains what the patch does. (in the format that\ngit format-patch emits.)\n\n> The original bug reporting-email and the relevant discussion is here\n...\n> The crux of the fix is, in the current code, engine drops the buffer and then truncates the file, but a crash before the truncate and after the buffer-drop is causing the corruption. Patch reverses the order i.e. truncate the file and drop the buffer later.\n\nBufferAlloc doesn't wait for the BM_IO_IN_PROGRESS for a valid buffer.\n\nI'm not sure it's acceptable to remember all to-be-deleted buffers\nwhile truncation.\n\n+\t /*START_CRIT_SECTION();*/\n\nIs this a point of argument? 
It is not needed if we choose the\nstrategy (c) in [1], since the protocol is aiming to allow server to\ncontinue running after truncation failure.\n\n[1]: https://www.postgresql.org/message-id/20191207001232.klidxnm756wqxvwx%40alap3.anarazel.de\n\nHowever, note that md truncates a \"file\" in a non-atomic way. mdtruncate\ntruncates multiple files from the last segment toward the\nbeginning. If mdtruncate successfully truncated the first several\nsegments then failed, retaining all buffers triggers assertion failure\nin mdwrite while buffer flush.\n\n\nSome typos found:\n\n+\t * a backround task might flush them to the disk right after we\n\ns/backround/background/\n\n+ *\t\tsaved list of buffers that were marked as BM_IO_IN_PRGRESS just\ns/BM_IO_IN_PRGRESS/BM_IO_IN_PROGRESS/\n\n+ * as BM_IO_IN_PROGRES. Though the buffers are marked for IO, they\ns/BM_IO_IN_PROGRES/BM_IO_IN_PROGRESS/\n\n+ * being dicarded).\ns/dicarded/discarded/\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 24 Mar 2020 18:18:12 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "Hi,\n\nOn 2020-03-24 18:18:12 +0900, Kyotaro Horiguchi wrote:\n> At Mon, 23 Mar 2020 20:56:59 +0000, Teja Mupparti <tejeswarm@hotmail.com> wrote in \n> > The original bug reporting-email and the relevant discussion is here\n> ...\n> > The crux of the fix is, in the current code, engine drops the buffer and then truncates the file, but a crash before the truncate and after the buffer-drop is causing the corruption. Patch reverses the order i.e. truncate the file and drop the buffer later.\n> \n> BufferAlloc doesn't wait for the BM_IO_IN_PROGRESS for a valid buffer.\n\nI don't think that's true. For any of this to be relevant the buffer has\nto be dirty. In which case BufferAlloc() has to call\nFlushBuffer(). 
Which in turn does a WaitIO() if BM_IO_IN_PROGRESS is\nset.\n\nWhat path are you thinking of? Or alternatively, what am I missing?\n\n\n> I'm not sure it's acceptable to remember all to-be-deleted buffers\n> while truncation.\n\nI don't see a real problem with it. Nor really a good alternative. Note\nthat for autovacuum truncations we'll only truncate a limited number of\nbuffers at once, and for most relation truncations we don't enter this\npath (since we create a new relfilenode instead).\n\n\n> \n> +\t /*START_CRIT_SECTION();*/\n\n> Is this a point of argument? It is not needed if we choose the\n> strategy (c) in [1], since the protocol is aiming to allow server to\n> continue running after truncation failure.\n> \n> [1]: https://www.postgresql.org/message-id/20191207001232.klidxnm756wqxvwx%40alap3.anarazel.de\n\nI think it's entirely broken to continue running after a truncation\nfailure. We obviously have to first WAL log the truncation (since\notherwise we can crash just after doing the truncation). But we cannot\njust continue running after WAL logging, but not performing the\nassociated action: The most obvious reason is that otherwise a replica\nwill execute the truncation, but the primary will not.\n\nThe whole justification for that behaviour \"It would turn a usually\nharmless failure to truncate, that might spell trouble at WAL replay,\ninto a certain PANIC.\" was always dubious (since on-disk and in-memory\nstate now can diverge), but it's clearly wrong once replication had\nentered the picture. 
There's just no alternative to a critical section\nhere.\n\n\nIf we are really concerned with truncation failing - I don't know why we\nwould be, we accept that we have to be able to modify files etc to stay\nup - we can add a pre-check ensuring that permissions are set up\nappropriately to allow us to truncate.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 30 Mar 2020 16:31:59 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "At Mon, 30 Mar 2020 16:31:59 -0700, Andres Freund <andres@anarazel.de> wrote in \n> Hi,\n> \n> On 2020-03-24 18:18:12 +0900, Kyotaro Horiguchi wrote:\n> > At Mon, 23 Mar 2020 20:56:59 +0000, Teja Mupparti <tejeswarm@hotmail.com> wrote in \n> > > The original bug reporting-email and the relevant discussion is here\n> > ...\n> > > The crux of the fix is, in the current code, engine drops the buffer and then truncates the file, but a crash before the truncate and after the buffer-drop is causing the corruption. Patch reverses the order i.e. truncate the file and drop the buffer later.\n> > \n> > BufferAlloc doesn't wait for the BM_IO_IN_PROGRESS for a valid buffer.\n> \n> I don't think that's true. For any of this to be relevant the buffer has\n> to be dirty. In which case BufferAlloc() has to call\n> FlushBuffer(). Which in turn does a WaitIO() if BM_IO_IN_PROGRESS is\n> set.\n> \n> What path are you thinking of? Or alternatively, what am I missing?\n\n# I would be wrong with far low odds..\n\n\"doesn't\" is overstated. Is there a case where the buffer is already\nflushed by checkpoint? (If that is the case, dropping clean buffers at\nmarking truncate would work?)\n\n> > I'm not sure it's acceptable to remember all to-be-deleted buffers\n> > while truncation.\n> \n> I don't see a real problem with it. Nor really a good alternative. 
Note\n> that for autovacuum truncations we'll only truncate a limited number of\n> buffers at once, and for most relation truncations we don't enter this\n> path (since we create a new relfilenode instead).\n\nThank you for the opinion. I agree to that.\n\n> > +\t /*START_CRIT_SECTION();*/\n> \n> > Is this a point of argument? It is not needed if we choose the\n> > strategy (c) in [1], since the protocol is aiming to allow server to\n> > continue running after truncation failure.\n> > \n> > [1]: https://www.postgresql.org/message-id/20191207001232.klidxnm756wqxvwx%40alap3.anarazel.de\n> \n> I think it's entirely broken to continue running after a truncation\n> failure. We obviously have to first WAL log the truncation (since\n> otherwise we can crash just after doing the truncation). But we cannot\n> just continue running after WAL logging, but not performing the\n> associated action: The most obvious reason is that otherwise a replica\n> will execute the truncation, but the primary will not.\n\nHmm. If we allow PANIC on truncation failure why do we need to go on\nthe complicated steps? Wouldn't it be enough to enclose the sequence\n(WAL insert - drop buffers - truncate) in a critical section? I\nbelieved that this project aims to fix the db-breakage on truncation\nfailure by allowing rollback on truncation failure?\n\n> The whole justification for that behaviour \"It would turn a usually\n> harmless failure to truncate, that might spell trouble at WAL replay,\n> into a certain PANIC.\" was always dubious (since on-disk and in-memory\n> state now can diverge), but it's clearly wrong once replication had\n> entered the picture. 
There's just no alternative to a critical section\n> here.\n\nYeah, I like that direction.\n\n> If we are really concerned with truncation failing - I don't know why we\n> would be, we accept that we have to be able to modify files etc to stay\n> up - we can add a pre-check ensuring that permissions are set up\n> appropriately to allow us to truncate.\n\nI think the question above is the core part of the problem.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 31 Mar 2020 16:36:39 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "Thanks Andres and Kyotaro for the quick review. I have fixed the typos and also included the critical section (emulated it with try-catch block since palloc()s are causing issues in the truncate code). This time I used git format-patch.\n\nRegards\nTeja\n\n\n\n________________________________\nFrom: Andres Freund <andres@anarazel.de>\nSent: Monday, March 30, 2020 4:31 PM\nTo: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nCc: tejeswarm@hotmail.com <tejeswarm@hotmail.com>; pgsql-hackers@postgresql.org <pgsql-hackers@postgresql.org>; hexexpert@comcast.net <hexexpert@comcast.net>\nSubject: Re: Corruption during WAL replay\n\nHi,\n\nOn 2020-03-24 18:18:12 +0900, Kyotaro Horiguchi wrote:\n> At Mon, 23 Mar 2020 20:56:59 +0000, Teja Mupparti <tejeswarm@hotmail.com> wrote in\n> > The original bug reporting-email and the relevant discussion is here\n> ...\n> > The crux of the fix is, in the current code, engine drops the buffer and then truncates the file, but a crash before the truncate and after the buffer-drop is causing the corruption. Patch reverses the order i.e. truncate the file and drop the buffer later.\n>\n> BufferAlloc doesn't wait for the BM_IO_IN_PROGRESS for a valid buffer.\n\nI don't think that's true. For any of this to be relevant the buffer has\nto be dirty. 
In which case BufferAlloc() has to call\nFlushBuffer(). Which in turn does a WaitIO() if BM_IO_IN_PROGRESS is\nset.\n\nWhat path are you thinking of? Or alternatively, what am I missing?\n\n\n> I'm not sure it's acceptable to remember all to-be-deleted buffers\n> while truncation.\n\nI don't see a real problem with it. Nor really a good alternative. Note\nthat for autovacuum truncations we'll only truncate a limited number of\nbuffers at once, and for most relation truncations we don't enter this\npath (since we create a new relfilenode instead).\n\n\n>\n> + /*START_CRIT_SECTION();*/\n\n> Is this a point of argument? It is not needed if we choose the\n> strategy (c) in [1], since the protocol is aiming to allow server to\n> continue running after truncation failure.\n>\n> [1]: https://www.postgresql.org/message-id/20191207001232.klidxnm756wqxvwx%40alap3.anarazel.de\n\nI think it's entirely broken to continue running after a truncation\nfailure. We obviously have to first WAL log the truncation (since\notherwise we can crash just after doing the truncation). But we cannot\njust continue running after WAL logging, but not performing the\nassociated action: The most obvious reason is that otherwise a replica\nwill execute the truncation, but the primary will not.\n\nThe whole justification for that behaviour \"It would turn a usually\nharmless failure to truncate, that might spell trouble at WAL replay,\ninto a certain PANIC.\" was always dubious (since on-disk and in-memory\nstate now can diverge), but it's clearly wrong once replication had\nentered the picture. 
There's just no alternative to a critical section\nhere.\n\n\nIf we are really concerned with truncation failing - I don't know why we\nwould be, we accept that we have to be able to modify files etc to stay\nup - we can add a pre-check ensuring that permissions are set up\nappropriately to allow us to truncate.\n\nGreetings,\n\nAndres Freund", "msg_date": "Fri, 10 Apr 2020 23:59:58 +0000", "msg_from": "Teja Mupparti <tejeswarm@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "On 2020-Mar-30, Andres Freund wrote:\n\n> If we are really concerned with truncation failing - I don't know why we\n> would be, we accept that we have to be able to modify files etc to stay\n> up - we can add a pre-check ensuring that permissions are set up\n> appropriately to allow us to truncate.\n\nI remember I saw a case where the datadir was NFS or some other network\nfilesystem thingy, and it lost connection just before autovacuum\ntruncation, or something like that -- so there was no permission\nfailure, but the truncate failed and yet PG soldiered on. 
I think the\nconnection was re-established soon thereafter and things went back to\n\"normal\", with nobody realizing that a truncate had been lost.\nCorruption was discovered a long time afterwards IIRC (weeks or months,\nI don't remember).\n\nI didn't review Teja's patch carefully, but the idea of panicking on\nfailure (causing WAL replay) seems better than the current behavior.\nI'd rather put the server to wait until storage is really back.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 10 Apr 2020 20:49:05 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "Hi,\n\nOn 2020-04-10 20:49:05 -0400, Alvaro Herrera wrote:\n> On 2020-Mar-30, Andres Freund wrote:\n> \n> > If we are really concerned with truncation failing - I don't know why we\n> > would be, we accept that we have to be able to modify files etc to stay\n> > up - we can add a pre-check ensuring that permissions are set up\n> > appropriately to allow us to truncate.\n> \n> I remember I saw a case where the datadir was NFS or some other network\n> filesystem thingy, and it lost connection just before autovacuum\n> truncation, or something like that -- so there was no permission\n> failure, but the truncate failed and yet PG soldiered on. I think the\n> connection was re-established soon thereafter and things went back to\n> \"normal\", with nobody realizing that a truncate had been lost.\n> Corruption was discovered a long time afterwards IIRC (weeks or months,\n> I don't remember).\n\nYea. In that case we're in a really bad state. Because we truncate after\nthrowing away the old buffer contents (even if dirty), we'll later read\npage contents \"from the past\". 
Which won't end well...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 10 Apr 2020 17:54:31 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "On Sat, 11 Apr 2020 at 09:00, Teja Mupparti <tejeswarm@hotmail.com> wrote:\n>\n> Thanks Andres and Kyotaro for the quick review. I have fixed the typos and also included the critical section (emulated it with try-catch block since palloc()s are causing issues in the truncate code). This time I used git format-patch.\n>\n\nI briefly looked at the latest patch but I'm not sure it's the right\nthing here to use PG_TRY/PG_CATCH to report the PANIC error. For\nexample, with the following code you changed, we will always end up\nwith emitting a PANIC \"failed to truncate the relation\" regardless of\nthe actual cause of the error.\n\n+ PG_CATCH();\n+ {\n+ ereport(PANIC, (errcode(ERRCODE_INTERNAL_ERROR),\n+ errmsg(\"failed to truncate the relation\")));\n+ }\n+ PG_END_TRY();\n\nAnd the comments of RelationTruncate() mentions:\n\n/*\n * We WAL-log the truncation before actually truncating, which means\n * trouble if the truncation fails. If we then crash, the WAL replay\n * likely isn't going to succeed in the truncation either, and cause a\n * PANIC. It's tempting to put a critical section here, but that cure\n * would be worse than the disease. It would turn a usually harmless\n * failure to truncate, that might spell trouble at WAL replay, into a\n * certain PANIC.\n */\n\nAs a second idea, I wonder if we can defer truncation until commit\ntime like smgrDoPendingDeletes mechanism. The sequence would be:\n\nAt RelationTruncate(),\n1. WAL logging.\n2. Remember buffers to be dropped.\n\nAt CommitTransaction(),\n3. 
Revisit the remembered buffers to check if the buffer still has\ntable data that needs to be truncated.\n4-a, If it has, we mark it as IO_IN_PROGRESS.\n4-b, If it already has different table data, ignore it.\n5, Truncate physical files.\n6, Mark the buffer we marked at #4-a as invalid.\n\nIf an error occurs between #3 and #6 or in abort case, we revert all\nIO_IN_PROGRESS flags on the buffers.\n\nIn the above idea, remembering all buffers having to-be-truncated\ntable at RelationTruncate(), we reduce the time for checking buffers\nat the commit time. Since we acquire AccessExclusiveLock the number of\nbuffers having to-be-truncated table's data never increases. A\ndownside would be that since we can truncate multiple relations we\nneed to remember all buffers of each truncated relations, which is up\nto (sizeof(int) * NBuffers) in total.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 13 Apr 2020 15:24:55 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "Hi,\n\nOn 2020-04-13 15:24:55 +0900, Masahiko Sawada wrote:\n> On Sat, 11 Apr 2020 at 09:00, Teja Mupparti <tejeswarm@hotmail.com> wrote:\n> >\n> > Thanks Andres and Kyotaro for the quick review. I have fixed the typos and also included the critical section (emulated it with try-catch block since palloc()s are causing issues in the truncate code). This time I used git format-patch.\n> >\n> \n> I briefly looked at the latest patch but I'm not sure it's the right\n> thing here to use PG_TRY/PG_CATCH to report the PANIC error. 
For\n> example, with the following code you changed, we will always end up\n> with emitting a PANIC \"failed to truncate the relation\" regardless of\n> the actual cause of the error.\n> \n> + PG_CATCH();\n> + {\n> + ereport(PANIC, (errcode(ERRCODE_INTERNAL_ERROR),\n> + errmsg(\"failed to truncate the relation\")));\n> + }\n> + PG_END_TRY();\n> \n> And the comments of RelationTruncate() mentions:\n\nI think that's just a workaround for mdtruncate not being usable in\ncritical sections.\n\n\n> /*\n> * We WAL-log the truncation before actually truncating, which means\n> * trouble if the truncation fails. If we then crash, the WAL replay\n> * likely isn't going to succeed in the truncation either, and cause a\n> * PANIC. It's tempting to put a critical section here, but that cure\n> * would be worse than the disease. It would turn a usually harmless\n> * failure to truncate, that might spell trouble at WAL replay, into a\n> * certain PANIC.\n> */\n\nYea, but that reasoning is just plain *wrong*. It's *never* ok to WAL\nlog something and then not perform the action. This leads to to primary\n/ replica getting out of sync, crash recovery potentially not completing\n(because of records referencing the should-be-truncated pages), ...\n\n\n> As a second idea, I wonder if we can defer truncation until commit\n> time like smgrDoPendingDeletes mechanism. The sequence would be:\n\nThis is mostly an issue during [auto]vacuum partially truncating the end\nof the file. We intentionally release the AEL regularly to allow other\naccesses to continue.\n\nFor transactional truncations we don't go down this path (as we create a\nnew relfilenode).\n\n\n> At RelationTruncate(),\n> 1. WAL logging.\n> 2. Remember buffers to be dropped.\n\nYou definitely cannot do that, as explained above.\n\n\n> At CommitTransaction(),\n> 3. 
Revisit the remembered buffers to check if the buffer still has\n> table data that needs to be truncated.\n> 4-a, If it has, we mark it as IO_IN_PROGRESS.\n> 4-b, If it already has different table data, ignore it.\n> 5, Truncate physical files.\n> 6, Mark the buffer we marked at #4-a as invalid.\n> \n> If an error occurs between #3 and #6 or in abort case, we revert all\n> IO_IN_PROGRESS flags on the buffers.\n\nWhat would this help with? If we still need the more complicated\ntruncation sequence?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 13 Apr 2020 01:40:31 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "On Mon, 13 Apr 2020 at 17:40, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2020-04-13 15:24:55 +0900, Masahiko Sawada wrote:\n> > On Sat, 11 Apr 2020 at 09:00, Teja Mupparti <tejeswarm@hotmail.com> wrote:\n> > >\n> > > Thanks Andres and Kyotaro for the quick review. I have fixed the typos and also included the critical section (emulated it with try-catch block since palloc()s are causing issues in the truncate code). This time I used git format-patch.\n> > >\n> >\n> > I briefly looked at the latest patch but I'm not sure it's the right\n> > thing here to use PG_TRY/PG_CATCH to report the PANIC error. 
For\n> > example, with the following code you changed, we will always end up\n> > with emitting a PANIC \"failed to truncate the relation\" regardless of\n> > the actual cause of the error.\n> >\n> > + PG_CATCH();\n> > + {\n> > + ereport(PANIC, (errcode(ERRCODE_INTERNAL_ERROR),\n> > + errmsg(\"failed to truncate the relation\")));\n> > + }\n> > + PG_END_TRY();\n> >\n> > And the comments of RelationTruncate() mentions:\n>\n> I think that's just a workaround for mdtruncate not being usable in\n> critical sections.\n>\n>\n> > /*\n> > * We WAL-log the truncation before actually truncating, which means\n> > * trouble if the truncation fails. If we then crash, the WAL replay\n> > * likely isn't going to succeed in the truncation either, and cause a\n> > * PANIC. It's tempting to put a critical section here, but that cure\n> > * would be worse than the disease. It would turn a usually harmless\n> > * failure to truncate, that might spell trouble at WAL replay, into a\n> > * certain PANIC.\n> > */\n>\n> Yea, but that reasoning is just plain *wrong*. It's *never* ok to WAL\n> log something and then not perform the action. This leads to to primary\n> / replica getting out of sync, crash recovery potentially not completing\n> (because of records referencing the should-be-truncated pages), ...\n>\n>\n> > As a second idea, I wonder if we can defer truncation until commit\n> > time like smgrDoPendingDeletes mechanism. The sequence would be:\n>\n> This is mostly an issue during [auto]vacuum partially truncating the end\n> of the file. We intentionally release the AEL regularly to allow other\n> accesses to continue.\n>\n> For transactional truncations we don't go down this path (as we create a\n> new relfilenode).\n>\n>\n> > At RelationTruncate(),\n> > 1. WAL logging.\n> > 2. 
Remember buffers to be dropped.\n>\n> You definitely cannot do that, as explained above.\n\nAh yes, you're right.\n\nSo it seems to me currently what we can do for this issue would be to\nenclose the truncation operation in a critical section. IIUC it's not\nenough just to reverse the order of dropping buffers and physical file\ntruncation because it cannot solve the problem of inconsistency on the\nstandby. And as Horiguchi-san mentioned, there is no need to reverse\nthat order if we envelop the truncation operation by a critical\nsection because we can recover page changes during crash recovery. The\nstrategy of writing out all dirty buffers before dropping buffers,\nproposed as (a) in [1], also seems not enough.\n\nRegards,\n\n[1] https://www.postgresql.org/message-id/20191207001232.klidxnm756wqxvwx%40alap3.anarazel.deDoing\nsync before truncation\n\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 13 Apr 2020 18:53:26 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "At Mon, 13 Apr 2020 18:53:26 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in \n> On Mon, 13 Apr 2020 at 17:40, Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > On 2020-04-13 15:24:55 +0900, Masahiko Sawada wrote:\n> > > On Sat, 11 Apr 2020 at 09:00, Teja Mupparti <tejeswarm@hotmail.com> wrote:\n> > > /*\n> > > * We WAL-log the truncation before actually truncating, which means\n> > > * trouble if the truncation fails. If we then crash, the WAL replay\n> > > * likely isn't going to succeed in the truncation either, and cause a\n> > > * PANIC. It's tempting to put a critical section here, but that cure\n> > > * would be worse than the disease. 
It would turn a usually harmless\n> > > * failure to truncate, that might spell trouble at WAL replay, into a\n> > > * certain PANIC.\n> > > */\n> >\n> > Yea, but that reasoning is just plain *wrong*. It's *never* ok to WAL\n> > log something and then not perform the action. This leads to to primary\n> > / replica getting out of sync, crash recovery potentially not completing\n> > (because of records referencing the should-be-truncated pages), ...\n\nIt is introduced in 2008 by 3396000684, for 8.4. So it can be said as\nan overlook when introducing log-shipping.\n\nThe reason other operations like INSERTs (that extends the underlying\nfile) are \"safe\" after an extension failure is the following\noperations are performed in shared buffers as if the new page exists,\nthen tries to extend the file again. So if we continue working after\ntruncation failure, we need to disguise on shared buffers as if the\ntruncated pages are gone. But we don't have a room for another flag\nin buffer header. For example, BM_DIRTY && !BM_VALID might be able to\nbe used as the state that the page should have been truncated but not\nsucceeded yet, but I'm not sure.\n\nAnyway, I think the prognosis of a truncation failure is far hopeless\nthan extension failure in most cases and I doubt that it's good to\nintroduce such a complex feature only to overcome such a hopeless\nsituation.\n\nIn short, I think we should PANIC in that case.\n\n> > > As a second idea, I wonder if we can defer truncation until commit\n> > > time like smgrDoPendingDeletes mechanism. The sequence would be:\n> >\n> > This is mostly an issue during [auto]vacuum partially truncating the end\n> > of the file. We intentionally release the AEL regularly to allow other\n> > accesses to continue.\n> >\n> > For transactional truncations we don't go down this path (as we create a\n> > new relfilenode).\n> >\n> >\n> > > At RelationTruncate(),\n> > > 1. WAL logging.\n> > > 2. 
Remember buffers to be dropped.\n> >\n> > You definitely cannot do that, as explained above.\n> \n> Ah yes, you're right.\n> \n> So it seems to me currently what we can do for this issue would be to\n> enclose the truncation operation in a critical section. IIUC it's not\n> enough just to reverse the order of dropping buffers and physical file\n> truncation because it cannot solve the problem of inconsistency on the\n> standby. And as Horiguchi-san mentioned, there is no need to reverse\n> that order if we envelop the truncation operation by a critical\n> section because we can recover page changes during crash recovery. The\n> strategy of writing out all dirty buffers before dropping buffers,\n> proposed as (a) in [1], also seems not enough.\n\nAgreed. Since it's not acceptable ether WAL-logging->not-performing\nnor performing->WAL-logging, there's no other way than working as if\ntruncation is succeeded (and try again) even if it actually\nfailed. But it would be too complex.\n\nJust making it a critical section seems the right thing here.\n\n\n> [1] https://www.postgresql.org/message-id/20191207001232.klidxnm756wqxvwx%40alap3.anarazel.de\n> Doing sync before truncation\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 14 Apr 2020 11:35:28 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "Thanks Kyotaro and Masahiko for the feedback. 
I think there is a consensus on the critical-section around truncate, but I just want to emphasize the need for reversing the order of the dropping the buffers and the truncation.\n\n Repro details (when full page write = off)\n\n 1) Page on disk has empty LP 1, Insert into page LP 1\n 2) checkpoint START (Recovery REDO eventually starts here)\n 3) Delete all rows on the page (page is empty now)\n 4) Autovacuum kicks in and truncates the pages\n DropRelFileNodeBuffers - Dirty page NOT written, LP 1 on disk still empty\n 5) Checkpoint completes\n 6) Crash\n 7) smgrtruncate - Not reached (this is where we do the physical truncate)\n\n Now the crash-recovery starts\n\n Delete-log-replay (above step-3) reads page with empty LP 1 and the delete fails with PANIC (old page on disk with no insert)\n\nDoing recovery, truncate is even not reached, a WAL replay of the truncation will happen in the future but the recovery fails (repeatedly) even before reaching that point.\n\nBest regards,\nTeja\n\n________________________________\nFrom: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nSent: Monday, April 13, 2020 7:35 PM\nTo: masahiko.sawada@2ndquadrant.com <masahiko.sawada@2ndquadrant.com>\nCc: andres@anarazel.de <andres@anarazel.de>; tejeswarm@hotmail.com <tejeswarm@hotmail.com>; pgsql-hackers@postgresql.org <pgsql-hackers@postgresql.org>; hexexpert@comcast.net <hexexpert@comcast.net>\nSubject: Re: Corruption during WAL replay\n\nAt Mon, 13 Apr 2020 18:53:26 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in\n> On Mon, 13 Apr 2020 at 17:40, Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > On 2020-04-13 15:24:55 +0900, Masahiko Sawada wrote:\n> > > On Sat, 11 Apr 2020 at 09:00, Teja Mupparti <tejeswarm@hotmail.com> wrote:\n> > > /*\n> > > * We WAL-log the truncation before actually truncating, which means\n> > > * trouble if the truncation fails. 
If we then crash, the WAL replay\n> > > * likely isn't going to succeed in the truncation either, and cause a\n> > > * PANIC. It's tempting to put a critical section here, but that cure\n> > > * would be worse than the disease. It would turn a usually harmless\n> > > * failure to truncate, that might spell trouble at WAL replay, into a\n> > > * certain PANIC.\n> > > */\n> >\n> > Yea, but that reasoning is just plain *wrong*. It's *never* ok to WAL\n> > log something and then not perform the action. This leads to to primary\n> > / replica getting out of sync, crash recovery potentially not completing\n> > (because of records referencing the should-be-truncated pages), ...\n\nIt is introduced in 2008 by 3396000684, for 8.4. So it can be said as\nan overlook when introducing log-shipping.\n\nThe reason other operations like INSERTs (that extends the underlying\nfile) are \"safe\" after an extension failure is the following\noperations are performed in shared buffers as if the new page exists,\nthen tries to extend the file again. So if we continue working after\ntruncation failure, we need to disguise on shared buffers as if the\ntruncated pages are gone. But we don't have a room for another flag\nin buffer header. For example, BM_DIRTY && !BM_VALID might be able to\nbe used as the state that the page should have been truncated but not\nsucceeded yet, but I'm not sure.\n\nAnyway, I think the prognosis of a truncation failure is far hopeless\nthan extension failure in most cases and I doubt that it's good to\nintroduce such a complex feature only to overcome such a hopeless\nsituation.\n\nIn short, I think we should PANIC in that case.\n\n> > > As a second idea, I wonder if we can defer truncation until commit\n> > > time like smgrDoPendingDeletes mechanism. The sequence would be:\n> >\n> > This is mostly an issue during [auto]vacuum partially truncating the end\n> > of the file. 
We intentionally release the AEL regularly to allow other\n> > accesses to continue.\n> >\n> > For transactional truncations we don't go down this path (as we create a\n> > new relfilenode).\n> >\n> >\n> > > At RelationTruncate(),\n> > > 1. WAL logging.\n> > > 2. Remember buffers to be dropped.\n> >\n> > You definitely cannot do that, as explained above.\n> \n> Ah yes, you're right.\n> \n> So it seems to me currently what we can do for this issue would be to\n> enclose the truncation operation in a critical section. IIUC it's not\n> enough just to reverse the order of dropping buffers and physical file\n> truncation because it cannot solve the problem of inconsistency on the\n> standby. And as Horiguchi-san mentioned, there is no need to reverse\n> that order if we envelop the truncation operation by a critical\n> section because we can recover page changes during crash recovery. The\n> strategy of writing out all dirty buffers before dropping buffers,\n> proposed as (a) in [1], also seems not enough.
\nAgreed. Since it's not acceptable either WAL-logging->not-performing\nnor performing->WAL-logging, there's no other way than working as if\nthe truncation succeeded (and trying again) even if it actually\nfailed. But it would be too complex.\n\nJust making it a critical section seems the right thing here.\n\n\n> [1] https://www.postgresql.org/message-id/20191207001232.klidxnm756wqxvwx%40alap3.anarazel.de\n> Doing sync before truncation\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Tue, 14 Apr 2020 19:04:07 +0000", "msg_from": "Teja Mupparti <tejeswarm@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "On Wed, 15 Apr 2020 at 04:04, Teja Mupparti <tejeswarm@hotmail.com> wrote:\n>\n> Thanks Kyotaro and Masahiko for the feedback. 
I think there is a consensus on the critical-section around truncate, but I just want to emphasize the need for reversing the order of the dropping the buffers and the truncation.\n>\n> Repro details (when full page write = off)\n>\n> 1) Page on disk has empty LP 1, Insert into page LP 1\n> 2) checkpoint START (Recovery REDO eventually starts here)\n> 3) Delete all rows on the page (page is empty now)\n> 4) Autovacuum kicks in and truncates the pages\n> DropRelFileNodeBuffers - Dirty page NOT written, LP 1 on disk still empty\n> 5) Checkpoint completes\n> 6) Crash\n> 7) smgrtruncate - Not reached (this is where we do the physical truncate)\n>\n> Now the crash-recovery starts\n>\n> Delete-log-replay (above step-3) reads page with empty LP 1 and the delete fails with PANIC (old page on disk with no insert)\n>\n\nI agree that when replaying the deletion of (3) the page LP 1 is\nempty, but does that replay really fail with PANIC? I guess that we\nrecord that page into invalid_page_tab but don't raise a PANIC in this\ncase.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 12 Jun 2020 17:20:43 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "After nearly 5 years does something like the following yet exist?\nhttps://www.postgresql.org/message-id/559D4729.9080704@postgrespro.ru\n\nI feel that it would be useful to have the following two things. 
One PG enhancement and one standard extension.\n\n1) An option to \"explain\" to produce a wait events profile.\npostgres=# explain (analyze, waitprofile) update pgbench_accounts set bid=bid+1 where aid < 2000000;\n...\nExecution time: 23111.231 ms\n\n62.6% BufFileRead\n50.0% CPU\n9.3% LWLock\n\nIt uses a PG timer to do this.
\n2) An extension based function like: select pg_wait_profile(pid, nSeconds, timerFrequency) to return the same thing for an already running query. Useful if you want to examine some already long running query that is taking too long.\n\nNeither of these would be doing the heavy weight pg_stat_activity but directly poll the wait event in PROC. I've already coded the EXPLAIN option.\n\nFurthermore, can't we just remove the following \"IF\" test from pgstat_report_wait_{start,end}?\nif (!pgstat_track_activities || !proc)\nreturn;\nJust do the assignment of wait_event_info always. We should use a dummy PGPROC assigned to MyProc until we assign the one in the procarray in shared memory. That way we don't need the \"!proc\" test.\nAbout the only thing I'd want to verify is whether wait_event_info is on the same cache lines as anything else having to do with snapshots.\n\nIf I recall correctly the blank lines above I've used to make this more readable will disappear. :-(\n\n- Dan Wood", "msg_date": "Fri, 10 Jul 2020 13:23:40 -0700 (PDT)", "msg_from": "Daniel Wood <hexexpert@comcast.net>", "msg_from_op": false, "msg_subject": "Wait profiling" }, { "msg_contents": "On 2020-Jul-10, Daniel Wood wrote:\n\n> After nearly 5 years does something like the following yet exist?\n> https://www.postgresql.org/message-id/559D4729.9080704@postgrespro.ru\n\nYes, we have pg_stat_activity.wait_events which implement pretty much\nwhat Ildus describes there.\n\n> 1) An option to \"explain\" to produce a wait events profile.\n> postgres=# explain (analyze, waitprofile) update pgbench_accounts set bid=bid+1 where aid < 2000000;\n> ...\n> Execution time: 23111.231 ms\n\nThere's an out-of-core extension, search for pg_wait_sampling. I\nhaven't tested it yet ...\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 10 Jul 2020 16:37:13 -0400", "msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Wait profiling" }, { "msg_contents": "On Fri, Jul 10, 2020 at 10:37 PM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n>\n> On 2020-Jul-10, Daniel Wood wrote:\n>\n> > After nearly 5 years does something like the following yet exist?\n> > https://www.postgresql.org/message-id/559D4729.9080704@postgrespro.ru\n>\n> Yes, we have pg_stat_activity.wait_events which implement pretty much\n> what Ildus describes there.\n>\n> > 1) An option to \"explain\" to produce a wait events profile.\n> > postgres=# explain (analyze, waitprofile) update pgbench_accounts set bid=bid+1 where aid < 2000000;\n> > ...\n> > Execution time: 23111.231 ms\n>\n> There's an out-of-core extension, search for pg_wait_sampling. I\n> haven't tested it yet ...\n\nI use it, and I know multiple people that are also using it (or about\nto, it's currently being packaged) in production. 
It's working quite\nwell and is compatible with pg_stat_statements' queryid. You can see\nsome examples of dashboards that can be built on top of this extension\nat https://powa.readthedocs.io/en/latest/components/stats_extensions/pg_wait_sampling.html.\n\n\n", "msg_date": "Sat, 11 Jul 2020 13:17:14 +0200", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Wait profiling" }, { "msg_contents": "On 14/04/2020 22:04, Teja Mupparti wrote:\n> Thanks Kyotaro and Masahiko for the feedback. I think there is a \n> consensus on the critical-section around truncate,\n\n+1\n\n> but I just want to emphasize the need for reversing the order of the\n> dropping the buffers and the truncation.\n> \n>  Repro details (when full page write = off)\n> \n>          1) Page on disk has empty LP 1, Insert into page LP 1\n>          2) checkpoint START (Recovery REDO eventually starts here)\n>          3) Delete all rows on the page (page is empty now)\n>          4) Autovacuum kicks in and truncates the pages\n>                  DropRelFileNodeBuffers - Dirty page NOT written, LP 1 \n> on disk still empty\n>          5) Checkpoint completes\n>          6) Crash\n>          7) smgrtruncate - Not reached (this is where we do the \n> physical truncate)\n> \n>  Now the crash-recovery starts\n> \n>          Delete-log-replay (above step-3) reads page with empty LP 1 \n> and the delete fails with PANIC (old page on disk with no insert)\n> \n> Doing recovery, truncate is even not reached, a WAL replay of the \n> truncation will happen in the future but the recovery fails (repeatedly) \n> even before reaching that point.\n\nHmm. 
I think simply reversing the order of DropRelFileNodeBuffers() and \ntruncating the file would open a different issue:\n\n 1) Page on disk has empty LP 1, Insert into page LP 1\n 2) checkpoint START (Recovery REDO eventually starts here)\n 3) Delete all rows on the page (page is empty now)\n 4) Autovacuum kicks in and starts truncating\n 5) smgrtruncate() truncates the file\n 6) checkpoint writes out buffers for pages that were just truncated \naway, expanding the file again.\n\nYour patch had a mechanism to mark the buffers as io-in-progress before \ntruncating the file to fix that, but I'm wary of that approach. Firstly, \nit requires scanning the buffers that are dropped twice, which can take \na long time. I remember that people have already complained that \nDropRelFileNodeBuffers() is slow, when it has to scan all the buffers \nonce. More importantly, abusing the BM_IO_INPROGRESS flag for this seems \nbad. For starters, because you're not holding buffer's I/O lock, I \nbelieve the checkpointer would busy-wait on the buffers until the \ntruncation has completed. See StartBufferIO() and AbortBufferIO().\n\nPerhaps a better approach would be to prevent the checkpoint from \ncompleting, until all in-progress truncations have completed. We have a \nmechanism to wait out in-progress commits at the beginning of a \ncheckpoint, right after the redo point has been established. See \ncomments around the GetVirtualXIDsDelayingChkpt() function call in \nCreateCheckPoint(). We could have a similar mechanism to wait out the \ntruncations before *completing* a checkpoint.\n\n- Heikki\n\n\n", "msg_date": "Mon, 17 Aug 2020 14:05:37 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "Hi,\n\nOn 2020-08-17 14:05:37 +0300, Heikki Linnakangas wrote:\n> On 14/04/2020 22:04, Teja Mupparti wrote:\n> > Thanks Kyotaro and Masahiko for the feedback. 
I think there is a\n> > consensus on the critical-section around truncate,\n> \n> +1\n\nI'm inclined to think that we should do that independent of the far more\ncomplicated fix for other related issues.\n\n\n> > but I just want to emphasize the need for reversing the order of the\n> > dropping the buffers and the truncation.\n> > \n> >  Repro details (when full page write = off)\n> > \n> >          1) Page on disk has empty LP 1, Insert into page LP 1\n> >          2) checkpoint START (Recovery REDO eventually starts here)\n> >          3) Delete all rows on the page (page is empty now)\n> >          4) Autovacuum kicks in and truncates the pages\n> >                  DropRelFileNodeBuffers - Dirty page NOT written, LP 1\n> > on disk still empty\n> >          5) Checkpoint completes\n> >          6) Crash\n> >          7) smgrtruncate - Not reached (this is where we do the\n> > physical truncate)\n> > \n> >  Now the crash-recovery starts\n> > \n> >          Delete-log-replay (above step-3) reads page with empty LP 1\n> > and the delete fails with PANIC (old page on disk with no insert)\n> > \n> > Doing recovery, truncate is even not reached, a WAL replay of the\n> > truncation will happen in the future but the recovery fails (repeatedly)\n> > even before reaching that point.\n> \n> Hmm. I think simply reversing the order of DropRelFileNodeBuffers() and\n> truncating the file would open a different issue:\n> \n> 1) Page on disk has empty LP 1, Insert into page LP 1\n> 2) checkpoint START (Recovery REDO eventually starts here)\n> 3) Delete all rows on the page (page is empty now)\n> 4) Autovacuum kicks in and starts truncating\n> 5) smgrtruncate() truncates the file\n> 6) checkpoint writes out buffers for pages that were just truncated away,\n> expanding the file again.\n> \n> Your patch had a mechanism to mark the buffers as io-in-progress before\n> truncating the file to fix that, but I'm wary of that approach. 
Firstly, it\n> requires scanning the buffers that are dropped twice, which can take a long\n> time.\n\nI was thinking that we'd keep track of all the buffers marked as \"in\nprogress\" that way, avoiding the second scan.\n\nIt's also worth keeping in mind that this code is really only relevant\nfor partial truncations, which don't happen at the same frequency as\ntransactional truncations.\n\n\n> I remember that people have already complained that\n> DropRelFileNodeBuffers() is slow, when it has to scan all the buffers\n> once.\n\nBut that's when dropping many relations, normally. E.g. at the end of a\nregression test.\n\n\n> More importantly, abusing the BM_IO_INPROGRESS flag for this seems\n> bad. For starters, because you're not holding buffer's I/O lock, I\n> believe the checkpointer would busy-wait on the buffers until the\n> truncation has completed. See StartBufferIO() and AbortBufferIO().\n\nI think we should apply Robert's patch that makes io locks into\ncondition variables. Then we can fairly easily have many many buffers io\nlocked. Obviously there's some issues with doing so in the back\nbranches :(\n\nI'm working on an AIO branch, and that also requires to be able to mark\nmultiple buffers as in-progress, FWIW.\n\n\n> Perhaps a better approach would be to prevent the checkpoint from\n> completing, until all in-progress truncations have completed. We have a\n> mechanism to wait out in-progress commits at the beginning of a checkpoint,\n> right after the redo point has been established. See comments around the\n> GetVirtualXIDsDelayingChkpt() function call in CreateCheckPoint(). 
We could\n> have a similar mechanism to wait out the truncations before *completing* a\n> checkpoint.\n\nWhat I outlined earlier *is* essentially a way to do so, by preventing\ncheckpointing from finishing the buffer scan while a dangerous state\nexists.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 17 Aug 2020 11:22:15 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "Status update for a commitfest entry.\r\n\r\nI see quite a few unanswered questions in the thread since the last patch version was sent. So, I move it to \"Waiting on Author\".\n\nThe new status of this patch is: Waiting on Author\n", "msg_date": "Fri, 30 Oct 2020 16:34:12 +0000", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "On Tue, Aug 18, 2020 at 3:22 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2020-08-17 14:05:37 +0300, Heikki Linnakangas wrote:\n> > On 14/04/2020 22:04, Teja Mupparti wrote:\n> > > Thanks Kyotaro and Masahiko for the feedback. I think there is a\n> > > consensus on the critical-section around truncate,\n> >\n> > +1\n>\n> I'm inclined to think that we should do that independent of the far more\n> complicated fix for other related issues.\n\n+1\n\nIf we had a critical section in RelationTruncate(), crash recovery\nwould continue failing until the situation of the underlying file is\nrecovered if a PANIC happens. The current comment in\nRelationTruncate() says it’s worse than the disease. But considering\nphysical replication, as Andres mentioned, a failure to truncate the\nfile after logging WAL is no longer a harmless failure. Also, the\ncritical section would be necessary even if we reversed the order of\ntruncation and dropping buffers and resolved the issue. 
So I agree to\nproceed with the patch that adds a critical section independent of\nfixing other related things discussed in this thread. If Teja seems\nnot to work on this I’ll write the patch.\n\nRegards,\n\n\n--\nMasahiko Sawada\nEnterpriseDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Fri, 6 Nov 2020 20:40:54 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "On 06.11.2020 14:40, Masahiko Sawada wrote:\n>\n> So I agree to\n> proceed with the patch that adds a critical section independent of\n> fixing other related things discussed in this thread. If Teja seems\n> not to work on this I’ll write the patch.\n>\n> Regards,\n>\n>\n> --\n> Masahiko Sawada\n> EnterpriseDB: https://www.enterprisedb.com/\n>\n>\nStatus update for a commitfest entry.\n\nThe commitfest is closed now. As this entry is a bug fix, I am moving it \nto the next CF.\nAre you planning to continue working on it?\n\n-- \nAnastasia Lubennikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n", "msg_date": "Tue, 1 Dec 2020 17:58:43 +0300", "msg_from": "Anastasia Lubennikova <a.lubennikova@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "At Mon, 17 Aug 2020 11:22:15 -0700, Andres Freund <andres@anarazel.de> wrote in \n> Hi,\n> \n> On 2020-08-17 14:05:37 +0300, Heikki Linnakangas wrote:\n> > On 14/04/2020 22:04, Teja Mupparti wrote:\n> > > Thanks Kyotaro and Masahiko for the feedback. I think there is a\n> > > consensus on the critical-section around truncate,\n> > \n> > +1\n> \n> I'm inclined to think that we should do that independent of the far more\n> complicated fix for other related issues.\n...\n> > Perhaps a better approach would be to prevent the checkpoint from\n> > completing, until all in-progress truncations have completed. 
We have a\n> > mechanism to wait out in-progress commits at the beginning of a checkpoint,\n> > right after the redo point has been established. See comments around the\n> > GetVirtualXIDsDelayingChkpt() function call in CreateCheckPoint(). We could\n> > have a similar mechanism to wait out the truncations before *completing* a\n> > checkpoint.\n> \n> What I outlined earlier *is* essentially a way to do so, by preventing\n> checkpointing from finishing the buffer scan while a dangerous state\n> exists.\n\nSeems reasonable. The attached does that. It actually works for the\ninitial case.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 06 Jan 2021 17:33:27 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "On Wed, Jan 6, 2021 at 1:33 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> At Mon, 17 Aug 2020 11:22:15 -0700, Andres Freund <andres@anarazel.de>\n> wrote in\n> > Hi,\n> >\n> > On 2020-08-17 14:05:37 +0300, Heikki Linnakangas wrote:\n> > > On 14/04/2020 22:04, Teja Mupparti wrote:\n> > > > Thanks Kyotaro and Masahiko for the feedback. I think there is a\n> > > > consensus on the critical-section around truncate,\n> > >\n> > > +1\n> >\n> > I'm inclined to think that we should do that independent of the far more\n> > complicated fix for other related issues.\n> ...\n> > > Perhaps a better approach would be to prevent the checkpoint from\n> > > completing, until all in-progress truncations have completed. We have a\n> > > mechanism to wait out in-progress commits at the beginning of a\n> checkpoint,\n> > > right after the redo point has been established. See comments around\n> the\n> > > GetVirtualXIDsDelayingChkpt() function call in CreateCheckPoint(). 
We\n> could\n> > > have a similar mechanism to wait out the truncations before\n> *completing* a\n> > > checkpoint.\n> >\n> > What I outlined earlier *is* essentially a way to do so, by preventing\n> > checkpointing from finishing the buffer scan while a dangerous state\n> > exists.\n>\n> Seems reasonable. The attached does that. It actually works for the\n> initial case.\n>\n> regards.\n>\n> --\n> Kyotaro Horiguchi\n> NTT Open Source Software Center\n>\n\nThe regression is failing for this patch, do you mind look at that and send\nthe updated patch?\n\nhttps://api.cirrus-ci.com/v1/task/6313174510075904/logs/test.log\n\n...\nt/006_logical_decoding.pl ............ ok\nt/007_sync_rep.pl .................... ok\nBailout called. Further testing stopped: system pg_ctl failed\nFAILED--Further testing stopped: system pg_ctl failed\nmake[2]: *** [Makefile:19: check] Error 255\nmake[1]: *** [Makefile:49: check-recovery-recurse] Error 2\nmake: *** [GNUmakefile:71: check-world-src/test-recurse] Error 2\n...\n\n-- \nIbrar Ahmed\n\nOn Wed, Jan 6, 2021 at 1:33 PM Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:At Mon, 17 Aug 2020 11:22:15 -0700, Andres Freund <andres@anarazel.de> wrote in \n> Hi,\n> \n> On 2020-08-17 14:05:37 +0300, Heikki Linnakangas wrote:\n> > On 14/04/2020 22:04, Teja Mupparti wrote:\n> > > Thanks Kyotaro and Masahiko for the feedback. I think there is a\n> > > consensus on the critical-section around truncate,\n> > \n> > +1\n> \n> I'm inclined to think that we should do that independent of the far more\n> complicated fix for other related issues.\n...\n> > Perhaps a better approach would be to prevent the checkpoint from\n> > completing, until all in-progress truncations have completed. We have a\n> > mechanism to wait out in-progress commits at the beginning of a checkpoint,\n> > right after the redo point has been established. See comments around the\n> > GetVirtualXIDsDelayingChkpt() function call in CreateCheckPoint(). 
We could\n> > have a similar mechanism to wait out the truncations before *completing* a\n> > checkpoint.\n> \n> What I outlined earlier *is* essentially a way to do so, by preventing\n> checkpointing from finishing the buffer scan while a dangerous state\n> exists.\n\nSeems reasonable. The attached does that. It actually works for the\ninitial case.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\nThe regression is failing for this patch, do you mind look at that and send the updated patch?https://api.cirrus-ci.com/v1/task/6313174510075904/logs/test.log...t/006_logical_decoding.pl ............ okt/007_sync_rep.pl .................... okBailout called.  Further testing stopped:  system pg_ctl failedFAILED--Further testing stopped: system pg_ctl failedmake[2]: *** [Makefile:19: check] Error 255make[1]: *** [Makefile:49: check-recovery-recurse] Error 2 make: *** [GNUmakefile:71: check-world-src/test-recurse] Error 2...-- Ibrar Ahmed", "msg_date": "Thu, 4 Mar 2021 22:37:23 +0500", "msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "At Thu, 4 Mar 2021 22:37:23 +0500, Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote in \n> The regression is failing for this patch, do you mind look at that and send\n> the updated patch?\n> \n> https://api.cirrus-ci.com/v1/task/6313174510075904/logs/test.log\n> \n> ...\n> t/006_logical_decoding.pl ............ ok\n> t/007_sync_rep.pl .................... ok\n> Bailout called. 
Further testing stopped: system pg_ctl failed\n> FAILED--Further testing stopped: system pg_ctl failed\n> make[2]: *** [Makefile:19: check] Error 255\n> make[1]: *** [Makefile:49: check-recovery-recurse] Error 2\n> make: *** [GNUmakefile:71: check-world-src/test-recurse] Error 2\n> ...\n\n(I regret that I sent this as a .patch file..)\n\nThanks for pointing that out!\n\nThe patch assumed that the CHKPT_START/COMPLETE barriers are used\nexclusively of each other, but MarkBufferDirtyHint, which delays checkpoint\nstart, is called in RelationTruncate while delaying checkpoint completion.\nThat is neither a strange nor a harmful behavior. I changed delayChkpt from\nan enum to a bitmap integer so that the two barriers are separately\ntriggered.\n\nI'm not sure this is the way to go here, though. This fixes the issue\nof a crash during RelationTruncate, but the issue of smgrtruncate\nfailure during RelationTruncate still remains (unless we treat that\nfailure as PANIC?).\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\ndiff --git a/src/backend/access/transam/multixact.c b/src/backend/access/transam/multixact.c\nindex 1f9f1a1fa1..c1b0b48362 100644\n--- a/src/backend/access/transam/multixact.c\n+++ b/src/backend/access/transam/multixact.c\n@@ -3072,8 +3072,8 @@ TruncateMultiXact(MultiXactId newOldestMulti, Oid newOldestMultiDB)\n \t * crash/basebackup, even though the state of the data directory would\n \t * require it.\n \t */\n-\tAssert(!MyProc->delayChkpt);\n-\tMyProc->delayChkpt = true;\n+\tAssert((MyProc->delayChkpt & DELAY_CHKPT_START) == 0);\n+\tMyProc->delayChkpt |= DELAY_CHKPT_START;\n \n \t/* WAL log truncation */\n \tWriteMTruncateXlogRec(newOldestMultiDB,\n@@ -3099,7 +3099,7 @@ TruncateMultiXact(MultiXactId newOldestMulti, Oid newOldestMultiDB)\n \t/* Then offsets */\n \tPerformOffsetsTruncation(oldestMulti, newOldestMulti);\n\n-\tMyProc->delayChkpt = false;\n+\tMyProc->delayChkpt &= ~DELAY_CHKPT_START;\n \n \tEND_CRIT_SECTION();\n 
\tLWLockRelease(MultiXactTruncationLock);\ndiff --git a/src/backend/access/transam/twophase.c b/src/backend/access/transam/twophase.c\nindex 80d2d20d6c..85c720491b 100644\n--- a/src/backend/access/transam/twophase.c\n+++ b/src/backend/access/transam/twophase.c\n@@ -463,7 +463,7 @@ MarkAsPreparingGuts(GlobalTransaction gxact, TransactionId xid, const char *gid,\n \tproc->lxid = (LocalTransactionId) xid;\n \tproc->xid = xid;\n \tAssert(proc->xmin == InvalidTransactionId);\n-\tproc->delayChkpt = false;\n+\tproc->delayChkpt = 0;\n \tproc->statusFlags = 0;\n \tproc->pid = 0;\n \tproc->backendId = InvalidBackendId;\n@@ -1109,7 +1109,8 @@ EndPrepare(GlobalTransaction gxact)\n \n \tSTART_CRIT_SECTION();\n \n-\tMyProc->delayChkpt = true;\n+\tAssert((MyProc->delayChkpt & DELAY_CHKPT_START) == 0);\n+\tMyProc->delayChkpt |= DELAY_CHKPT_START;\n \n \tXLogBeginInsert();\n \tfor (record = records.head; record != NULL; record = record->next)\n@@ -1152,7 +1153,7 @@ EndPrepare(GlobalTransaction gxact)\n \t * checkpoint starting after this will certainly see the gxact as a\n \t * candidate for fsyncing.\n \t */\n-\tMyProc->delayChkpt = false;\n+\tMyProc->delayChkpt &= ~DELAY_CHKPT_START;\n \n \t/*\n \t * Remember that we have this GlobalTransaction entry locked for us. If\n@@ -2198,7 +2199,8 @@ RecordTransactionCommitPrepared(TransactionId xid,\n \tSTART_CRIT_SECTION();\n \n \t/* See notes in RecordTransactionCommit */\n-\tMyProc->delayChkpt = true;\n+\tAssert((MyProc->delayChkpt & DELAY_CHKPT_START) == 0);\n+\tMyProc->delayChkpt |= DELAY_CHKPT_START;\n \n \t/*\n \t * Emit the XLOG commit record. 
Note that we mark 2PC commits as\n@@ -2246,7 +2248,7 @@ RecordTransactionCommitPrepared(TransactionId xid,\n \tTransactionIdCommitTree(xid, nchildren, children);\n \n \t/* Checkpoint can proceed now */\n-\tMyProc->delayChkpt = false;\n+\tMyProc->delayChkpt &= ~DELAY_CHKPT_START;\n \n \tEND_CRIT_SECTION();\n \ndiff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c\nindex 4e6a3df6b8..f033e8940a 100644\n--- a/src/backend/access/transam/xact.c\n+++ b/src/backend/access/transam/xact.c\n@@ -1334,8 +1334,9 @@ RecordTransactionCommit(void)\n \t\t * This makes checkpoint's determination of which xacts are delayChkpt\n \t\t * a bit fuzzy, but it doesn't matter.\n \t\t */\n+\t\tAssert((MyProc->delayChkpt & DELAY_CHKPT_START) == 0);\n \t\tSTART_CRIT_SECTION();\n-\t\tMyProc->delayChkpt = true;\n+\t\tMyProc->delayChkpt |= DELAY_CHKPT_START;\n \n \t\tSetCurrentTransactionStopTimestamp();\n \n@@ -1436,7 +1437,7 @@ RecordTransactionCommit(void)\n \t */\n \tif (markXidCommitted)\n \t{\n-\t\tMyProc->delayChkpt = false;\n+\t\tMyProc->delayChkpt &= ~DELAY_CHKPT_START;\n \t\tEND_CRIT_SECTION();\n \t}\n \ndiff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\nindex 377afb8732..5f5703bd57 100644\n--- a/src/backend/access/transam/xlog.c\n+++ b/src/backend/access/transam/xlog.c\n@@ -9065,18 +9065,30 @@ CreateCheckPoint(int flags)\n \t * and we will correctly flush the update below. 
So we cannot miss any\n \t * xacts we need to wait for.\n \t */\n-\tvxids = GetVirtualXIDsDelayingChkpt(&nvxids);\n+\tvxids = GetVirtualXIDsDelayingChkpt(&nvxids, DELAY_CHKPT_START);\n \tif (nvxids > 0)\n \t{\n \t\tdo\n \t\t{\n \t\t\tpg_usleep(10000L);\t/* wait for 10 msec */\n-\t\t} while (HaveVirtualXIDsDelayingChkpt(vxids, nvxids));\n+\t\t} while (HaveVirtualXIDsDelayingChkpt(vxids, nvxids,\n+\t\t\t\t\t\t\t\t\t\t\t DELAY_CHKPT_START));\n \t}\n \tpfree(vxids);\n \n \tCheckPointGuts(checkPoint.redo, flags);\n \n+\tvxids = GetVirtualXIDsDelayingChkpt(&nvxids, DELAY_CHKPT_COMPLETE);\n+\tif (nvxids > 0)\n+\t{\n+\t\tdo\n+\t\t{\n+\t\t\tpg_usleep(10000L);\t/* wait for 10 msec */\n+\t\t} while (HaveVirtualXIDsDelayingChkpt(vxids, nvxids,\n+\t\t\t\t\t\t\t\t\t\t\t DELAY_CHKPT_COMPLETE));\n+\t}\n+\tpfree(vxids);\n+\n \t/*\n \t * Take a snapshot of running transactions and write this to WAL. This\n \t * allows us to reconstruct the state of running transactions during\ndiff --git a/src/backend/access/transam/xloginsert.c b/src/backend/access/transam/xloginsert.c\nindex 7052dc245e..1edd1b67ff 100644\n--- a/src/backend/access/transam/xloginsert.c\n+++ b/src/backend/access/transam/xloginsert.c\n@@ -923,7 +923,7 @@ XLogSaveBufferForHint(Buffer buffer, bool buffer_std)\n \t/*\n \t * Ensure no checkpoint can change our view of RedoRecPtr.\n \t */\n-\tAssert(MyProc->delayChkpt);\n+\tAssert((MyProc->delayChkpt & DELAY_CHKPT_START) != 0);\n \n \t/*\n \t * Update RedoRecPtr so that we can make the right decision\ndiff --git a/src/backend/catalog/storage.c b/src/backend/catalog/storage.c\nindex cba7a9ada0..579f23c991 100644\n--- a/src/backend/catalog/storage.c\n+++ b/src/backend/catalog/storage.c\n@@ -325,6 +325,16 @@ RelationTruncate(Relation rel, BlockNumber nblocks)\n \n \tRelationPreTruncate(rel);\n \n+\t/*\n+\t * If the file truncation fails but the concurrent checkpoint completes\n+\t * just before that, the next crash recovery can fail due to WAL records\n+\t * inconsistent with 
the untruncated pages. To avoid that situation we\n+\t * delay the checkpoint completion until we confirm the truncation to be\n+\t * successful.\n+\t */\n+\tAssert((MyProc->delayChkpt & DELAY_CHKPT_COMPLETE) == 0);\n+\tMyProc->delayChkpt |= DELAY_CHKPT_COMPLETE;\n+\n \t/*\n \t * We WAL-log the truncation before actually truncating, which means\n \t * trouble if the truncation fails. If we then crash, the WAL replay\n@@ -373,6 +383,8 @@ RelationTruncate(Relation rel, BlockNumber nblocks)\n \t */\n \tif (need_fsm_vacuum)\n \t\tFreeSpaceMapVacuumRange(rel, nblocks, InvalidBlockNumber);\n+\n+\tMyProc->delayChkpt &= ~DELAY_CHKPT_COMPLETE;\n }\n \n /*\ndiff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c\nindex 561c212092..1c9e971b31 100644\n--- a/src/backend/storage/buffer/bufmgr.c\n+++ b/src/backend/storage/buffer/bufmgr.c\n@@ -3803,7 +3803,7 @@ MarkBufferDirtyHint(Buffer buffer, bool buffer_std)\n \t{\n \t\tXLogRecPtr\tlsn = InvalidXLogRecPtr;\n \t\tbool\t\tdirtied = false;\n-\t\tbool\t\tdelayChkpt = false;\n+\t\tint\t\t\tdelayChkptMask = ~0;\n \t\tuint32\t\tbuf_state;\n \n \t\t/*\n@@ -3853,7 +3853,9 @@ MarkBufferDirtyHint(Buffer buffer, bool buffer_std)\n \t\t\t * essential that CreateCheckpoint waits for virtual transactions\n \t\t\t * rather than full transactionids.\n \t\t\t */\n-\t\t\tMyProc->delayChkpt = delayChkpt = true;\n+\t\t\tAssert((MyProc->delayChkpt & DELAY_CHKPT_START) == 0);\n+\t\t\tMyProc->delayChkpt |= DELAY_CHKPT_START;\n+\t\t\tdelayChkptMask = ~DELAY_CHKPT_START;\n \t\t\tlsn = XLogSaveBufferForHint(buffer, buffer_std);\n \t\t}\n \n@@ -3885,8 +3887,7 @@ MarkBufferDirtyHint(Buffer buffer, bool buffer_std)\n \t\tbuf_state |= BM_DIRTY | BM_JUST_DIRTIED;\n \t\tUnlockBufHdr(bufHdr, buf_state);\n \n-\t\tif (delayChkpt)\n-\t\t\tMyProc->delayChkpt = false;\n+\t\tMyProc->delayChkpt &= delayChkptMask;\n \n \t\tif (dirtied)\n \t\t{\ndiff --git a/src/backend/storage/ipc/procarray.c b/src/backend/storage/ipc/procarray.c\nindex 
4fc6ffb917..3e6759886a 100644\n--- a/src/backend/storage/ipc/procarray.c\n+++ b/src/backend/storage/ipc/procarray.c\n@@ -655,7 +655,10 @@ ProcArrayEndTransaction(PGPROC *proc, TransactionId latestXid)\n \n \t\tproc->lxid = InvalidLocalTransactionId;\n \t\tproc->xmin = InvalidTransactionId;\n-\t\tproc->delayChkpt = false;\t/* be sure this is cleared in abort */\n+\n+\t\t/* be sure this is cleared in abort */\n+\t\tproc->delayChkpt = 0;\n+\n \t\tproc->recoveryConflictPending = false;\n \n \t\t/* must be cleared with xid/xmin: */\n@@ -694,7 +697,10 @@ ProcArrayEndTransactionInternal(PGPROC *proc, TransactionId latestXid)\n \tproc->xid = InvalidTransactionId;\n \tproc->lxid = InvalidLocalTransactionId;\n \tproc->xmin = InvalidTransactionId;\n-\tproc->delayChkpt = false;\t/* be sure this is cleared in abort */\n+\n+\t/* be sure this is cleared in abort */\n+\tproc->delayChkpt = 0;\n+\n \tproc->recoveryConflictPending = false;\n \n \t/* must be cleared with xid/xmin: */\n@@ -2955,7 +2961,8 @@ GetOldestSafeDecodingTransactionId(bool catalogOnly)\n * delaying checkpoint because they have critical actions in progress.\n *\n * Constructs an array of VXIDs of transactions that are currently in commit\n- * critical sections, as shown by having delayChkpt set in their PGPROC.\n+ * critical sections, as shown by having delayChkpt set to the specified value\n+ * in their PGPROC.\n *\n * Returns a palloc'd array that should be freed by the caller.\n * *nvxids is the number of valid entries.\n@@ -2969,13 +2976,15 @@ GetOldestSafeDecodingTransactionId(bool catalogOnly)\n * for clearing of delayChkpt to propagate is unimportant for correctness.\n */\n VirtualTransactionId *\n-GetVirtualXIDsDelayingChkpt(int *nvxids)\n+GetVirtualXIDsDelayingChkpt(int *nvxids, int type)\n {\n \tVirtualTransactionId *vxids;\n \tProcArrayStruct *arrayP = procArray;\n \tint\t\t\tcount = 0;\n \tint\t\t\tindex;\n \n+\tAssert(type != 0);\n+\n \t/* allocate what's certainly enough result space */\n \tvxids = 
(VirtualTransactionId *)\n \t\tpalloc(sizeof(VirtualTransactionId) * arrayP->maxProcs);\n@@ -2987,7 +2996,7 @@ GetVirtualXIDsDelayingChkpt(int *nvxids)\n \t\tint\t\t\tpgprocno = arrayP->pgprocnos[index];\n \t\tPGPROC\t *proc = &allProcs[pgprocno];\n \n-\t\tif (proc->delayChkpt)\n+\t\tif ((proc->delayChkpt & type) != 0)\n \t\t{\n \t\t\tVirtualTransactionId vxid;\n \n@@ -3013,12 +3022,14 @@ GetVirtualXIDsDelayingChkpt(int *nvxids)\n * those numbers should be small enough for it not to be a problem.\n */\n bool\n-HaveVirtualXIDsDelayingChkpt(VirtualTransactionId *vxids, int nvxids)\n+HaveVirtualXIDsDelayingChkpt(VirtualTransactionId *vxids, int nvxids, int type)\n {\n \tbool\t\tresult = false;\n \tProcArrayStruct *arrayP = procArray;\n \tint\t\t\tindex;\n \n+\tAssert(type != 0);\n+\n \tLWLockAcquire(ProcArrayLock, LW_SHARED);\n \n \tfor (index = 0; index < arrayP->numProcs; index++)\n@@ -3029,7 +3040,8 @@ HaveVirtualXIDsDelayingChkpt(VirtualTransactionId *vxids, int nvxids)\n \n \t\tGET_VXID_FROM_PGPROC(vxid, *proc);\n \n-\t\tif (proc->delayChkpt && VirtualTransactionIdIsValid(vxid))\n+\t\tif ((proc->delayChkpt & type) != 0 &&\n+\t\t\tVirtualTransactionIdIsValid(vxid))\n \t\t{\n \t\t\tint\t\t\ti;\n \ndiff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c\nindex 897045ee27..7915cdd484 100644\n--- a/src/backend/storage/lmgr/proc.c\n+++ b/src/backend/storage/lmgr/proc.c\n@@ -394,7 +394,7 @@ InitProcess(void)\n \tMyProc->roleId = InvalidOid;\n \tMyProc->tempNamespaceId = InvalidOid;\n \tMyProc->isBackgroundWorker = IsBackgroundWorker;\n-\tMyProc->delayChkpt = false;\n+\tMyProc->delayChkpt = 0;\n \tMyProc->statusFlags = 0;\n \t/* NB -- autovac launcher intentionally does not set IS_AUTOVACUUM */\n \tif (IsAutoVacuumWorkerProcess())\n@@ -576,7 +576,7 @@ InitAuxiliaryProcess(void)\n \tMyProc->roleId = InvalidOid;\n \tMyProc->tempNamespaceId = InvalidOid;\n \tMyProc->isBackgroundWorker = IsBackgroundWorker;\n-\tMyProc->delayChkpt = 
false;\n+\tMyProc->delayChkpt = 0;\n \tMyProc->statusFlags = 0;\n \tMyProc->lwWaiting = false;\n \tMyProc->lwWaitMode = 0;\ndiff --git a/src/include/storage/proc.h b/src/include/storage/proc.h\nindex a777cb64a1..2799debdaf 100644\n--- a/src/include/storage/proc.h\n+++ b/src/include/storage/proc.h\n@@ -79,6 +79,10 @@ struct XidCache\n */\n #define INVALID_PGPROCNO\t\tPG_INT32_MAX\n \n+/* symbols for PGPROC.delayChkpt */\n+#define DELAY_CHKPT_START\t\t(1<<0) \n+#define DELAY_CHKPT_COMPLETE\t(1<<1)\n+\n typedef enum\n {\n \tPROC_WAIT_STATUS_OK,\n@@ -184,7 +188,8 @@ struct PGPROC\n \tpg_atomic_uint64 waitStart; /* time at which wait for lock acquisition\n \t\t\t\t\t\t\t\t * started */\n \n-\tbool\t\tdelayChkpt;\t\t/* true if this proc delays checkpoint start */\n+\tint\t\t\tdelayChkpt;\t\t/* if this proc delays checkpoint start and/or\n+\t\t\t\t\t\t\t\t * completion. */\n \n \tuint8\t\tstatusFlags;\t/* this backend's status flags, see PROC_*\n \t\t\t\t\t\t\t\t * above. mirrored in\ndiff --git a/src/include/storage/procarray.h b/src/include/storage/procarray.h\nindex b01fa52139..ec40130466 100644\n--- a/src/include/storage/procarray.h\n+++ b/src/include/storage/procarray.h\n@@ -15,11 +15,11 @@\n #define PROCARRAY_H\n \n #include \"storage/lock.h\"\n+#include \"storage/proc.h\"\n #include \"storage/standby.h\"\n #include \"utils/relcache.h\"\n #include \"utils/snapshot.h\"\n \n-\n extern Size ProcArrayShmemSize(void);\n extern void CreateSharedProcArray(void);\n extern void ProcArrayAdd(PGPROC *proc);\n@@ -59,8 +59,9 @@ extern TransactionId GetOldestActiveTransactionId(void);\n extern TransactionId GetOldestSafeDecodingTransactionId(bool catalogOnly);\n extern void GetReplicationHorizons(TransactionId *slot_xmin, TransactionId *catalog_xmin);\n \n-extern VirtualTransactionId *GetVirtualXIDsDelayingChkpt(int *nvxids);\n-extern bool HaveVirtualXIDsDelayingChkpt(VirtualTransactionId *vxids, int nvxids);\n+extern VirtualTransactionId *GetVirtualXIDsDelayingChkpt(int *nvxids, 
int type);\n+extern bool HaveVirtualXIDsDelayingChkpt(VirtualTransactionId *vxids,\n+\t\t\t\t\t\t\t\t\t\t int nvxids, int type);\n \n extern PGPROC *BackendPidGetProc(int pid);\n extern PGPROC *BackendPidGetProcWithLock(int pid);", "msg_date": "Fri, 05 Mar 2021 12:01:22 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "On Thu, Mar 4, 2021 at 10:01 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> The patch assumed that CHKPT_START/COMPLETE barrier are exclusively\n> used each other, but MarkBufferDirtyHint which delays checkpoint start\n> is called in RelationTruncate while delaying checkpoint completion.\n> That is not a strange nor harmful behavior. I changed delayChkpt to a\n> bitmap integer from an enum so that both barrier are separately\n> triggered.\n>\n> I'm not sure this is the way to go here, though. This fixes the issue\n> of a crash during RelationTruncate, but the issue of smgrtruncate\n> failure during RelationTruncate still remains (unless we treat that\n> failure as PANIC?).\n\nI like this patch. As I understand it, we're currently cheating by\nallowing checkpoints to complete without necessarily flushing all of\nthe pages that were dirty at the time we fixed the redo pointer out to\ndisk. We think this is OK because we know that those pages are going\nto get truncated away, but it's not really OK because when the system\nstarts up, it has to replay WAL starting from the checkpoint's redo\npointer, but the state of the page is not the same as it was at the\ntime when the redo pointer was the end of WAL, so redo fails. In the\ncase described in\nhttp://postgr.es/m/BYAPR06MB63739B2692DC6DBB3C5F186CABDA0@BYAPR06MB6373.namprd06.prod.outlook.com\nmodifications are made to the page before the redo pointer is fixed\nand those changes never make it to disk, but the truncation also never\nmakes it to the disk either. 
With this patch, that can't happen,\nbecause no checkpoint can intervene between when we (1) decide we're\nnot going to bother writing those dirty pages and (2) actually\ntruncate them away. So either the pages will get written as part of\nthe checkpoint, or else they'll be gone before the checkpoint\ncompletes. In the latter case, I suppose redo that would have modified\nthose pages will just be skipped, thus dodging the problem.\n\nIn RelationTruncate, I suggest that we ought to clear the\ndelay-checkpoint flag before rather than after calling\nFreeSpaceMapVacuumRange. Since the free space map is not fully\nWAL-logged, anything we're doing there should be non-critical. Also, I\nthink it might be better if MarkBufferDirtyHint stays closer to the\nexisting coding and just uses a Boolean and an if-test to decide\nwhether to clear the bit, instead of inventing a new mechanism. I\ndon't really see anything wrong with the new mechanism, but I think\nit's better to keep the patch minimal.\n\nAs you say, this doesn't fix the problem that truncation might fail.\nBut as Andres and Sawada-san said, the solution to that is to get rid\nof the comments saying that it's OK for truncation to fail and make it\na PANIC. However, I don't think that change needs to be part of this\npatch. Even if we do that, we still need to do this. 
And even if we do\nthis, we still need to do that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 10 Aug 2021 14:14:05 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I like this patch.\n\nI think the basic idea is about right, but I'm not happy with the\nthree-way delayChkpt business; that seems too cute by three-quarters.\nI think two independent boolean flags, one saying \"I'm preventing\ncheckpoint start\" and one saying \"I'm preventing checkpoint completion\",\nwould be much less confusing and also more future-proof. Who's to say\nthat we won't ever need both states to be set in the same process?\n\nI also dislike the fact that the patch has made procarray.h depend\non proc.h ... maybe I'm wrong, but I thought that there was a reason\nfor keeping those independent, if indeed this hasn't actually resulted\nin a circular-includes situation. If we avoid inventing that enum type\nthen there's no need for that. If we do need an enum, maybe it could\nbe put in some already-common prerequisite header.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 24 Sep 2021 15:37:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "On Fri, Sep 24, 2021 at 3:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > I like this patch.\n>\n> I think the basic idea is about right, but I'm not happy with the\n> three-way delayChkpt business; that seems too cute by three-quarters.\n> I think two independent boolean flags, one saying \"I'm preventing\n> checkpoint start\" and one saying \"I'm preventing checkpoint completion\",\n> would be much less confusing and also more future-proof. 
Who's to say\n> that we won't ever need both states to be set in the same process?\n\nNobody, but the version of the patch that I was looking at uses a\nseparate bit for each one:\n\n+/* symbols for PGPROC.delayChkpt */\n+#define DELAY_CHKPT_START (1<<0)\n+#define DELAY_CHKPT_COMPLETE (1<<1)\n\nOne could instead use separate Booleans, but there doesn't seem to be\nanything three-way about this?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 24 Sep 2021 16:08:44 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Sep 24, 2021 at 3:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I think the basic idea is about right, but I'm not happy with the\n>> three-way delayChkpt business; that seems too cute by three-quarters.\n\n> Nobody, but the version of the patch that I was looking at uses a\n> separate bit for each one:\n\n> +/* symbols for PGPROC.delayChkpt */\n> +#define DELAY_CHKPT_START (1<<0)\n> +#define DELAY_CHKPT_COMPLETE (1<<1)\n\nHm, that's not in the patch version that the CF app claims to be\nlatest [1]. 
It does this:\n\n+/* type for PGPROC.delayChkpt */\n+typedef enum DelayChkptType\n+{\n+\tDELAY_CHKPT_NONE = 0,\n+\tDELAY_CHKPT_START,\n+\tDELAY_CHKPT_COMPLETE\n+} DelayChkptType;\n\nwhich seems like a distinct disimprovement over what you're quoting.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/20210106.173327.1444585955309078930.horikyota.ntt@gmail.com\n\n\n", "msg_date": "Fri, 24 Sep 2021 16:22:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "Thanks for looking at this, Robert and Tom.\n\nAt Fri, 24 Sep 2021 16:22:28 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Fri, Sep 24, 2021 at 3:42 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I think the basic idea is about right, but I'm not happy with the\n> >> three-way delayChkpt business; that seems too cute by three-quarters.\n> \n> > Nobody, but the version of the patch that I was looking at uses a\n> > separate bit for each one:\n> \n> > +/* symbols for PGPROC.delayChkpt */\n> > +#define DELAY_CHKPT_START (1<<0)\n> > +#define DELAY_CHKPT_COMPLETE (1<<1)\n> \n> Hm, that's not in the patch version that the CF app claims to be\n> latest [1]. It does this:\n> \n> +/* type for PGPROC.delayChkpt */\n> +typedef enum DelayChkptType\n> +{\n> +\tDELAY_CHKPT_NONE = 0,\n> +\tDELAY_CHKPT_START,\n> +\tDELAY_CHKPT_COMPLETE\n> +} DelayChkptType;\n> \n> which seems like a distinct disimprovement over what you're quoting.\n \nYeah, that is because the latest patch was not attached as *.patch/diff\nbut as *.txt. I didn't name it *.patch in order to avoid a noise patch\nin that thread, although it was too late. However, that seems to\nhave led to another problem..\n\nTom's concern is right. 
Actually, the two events can happen\nsimultaneously, but the latest *.patch.txt handles that case, as Robert\nsaid.\n\nOne advantage of having the two flags as one bitmap integer is that it\nslightly simplifies the logic in GetVirtualXIDsDelayingChkpt and\nHaveVirtualXIDsDelayingChkpt. On the other hand, it very slightly\ncomplicates how to set/reset the flags.\n\nGetVirtualXIDsDelayingChkpt:\n+\t\tif ((proc->delayChkpt & type) != 0)\n\nvs\n\n+\t\tif (delayStart)\n+\t\t\tdelayflag = proc->delayChkptStart;\n+\t\telse\n+\t\t\tdelayflag = proc->delayChkptEnd;\n+\t\tif (delayflag != 0)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 27 Sep 2021 17:28:10 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "Thank you for the comments! (Sorry for the late response.)\n\nAt Tue, 10 Aug 2021 14:14:05 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n> On Thu, Mar 4, 2021 at 10:01 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > The patch assumed that CHKPT_START/COMPLETE barrier are exclusively\n> > used each other, but MarkBufferDirtyHint which delays checkpoint start\n> > is called in RelationTruncate while delaying checkpoint completion.\n> > That is not a strange nor harmful behavior. I changed delayChkpt to a\n> > bitmap integer from an enum so that both barrier are separately\n> > triggered.\n> >\n> > I'm not sure this is the way to go here, though. This fixes the issue\n> > of a crash during RelationTruncate, but the issue of smgrtruncate\n> > failure during RelationTruncate still remains (unless we treat that\n> > failure as PANIC?).\n> \n> I like this patch. As I understand it, we're currently cheating by\n> allowing checkpoints to complete without necessarily flushing all of\n> the pages that were dirty at the time we fixed the redo pointer out to\n> disk. 
We think this is OK because we know that those pages are going\n> to get truncated away, but it's not really OK because when the system\n> starts up, it has to replay WAL starting from the checkpoint's redo\n> pointer, but the state of the page is not the same as it was at the\n> time when the redo pointer was the end of WAL, so redo fails. In the\n> case described in\n> http://postgr.es/m/BYAPR06MB63739B2692DC6DBB3C5F186CABDA0@BYAPR06MB6373.namprd06.prod.outlook.com\n> modifications are made to the page before the redo pointer is fixed\n> and those changes never make it to disk, but the truncation also never\n> makes it to the disk either. With this patch, that can't happen,\n> because no checkpoint can intervene between when we (1) decide we're\n> not going to bother writing those dirty pages and (2) actually\n> truncate them away. So either the pages will get written as part of\n> the checkpoint, or else they'll be gone before the checkpoint\n> completes. In the latter case, I suppose redo that would have modified\n> those pages will just be skipped, thus dodging the problem.\n\nI think your understanding is right.\n\n> In RelationTruncate, I suggest that we ought to clear the\n> delay-checkpoint flag before rather than after calling\n> FreeSpaceMapVacuumRange. Since the free space map is not fully\n> WAL-logged, anything we're doing there should be non-critical. Also, I\n\nAgreed and fixed.\n\n> think it might be better if MarkBufferDirtyHint stays closer to the\n> existing coding and just uses a Boolean and an if-test to decide\n> whether to clear the bit, instead of inventing a new mechanism. I\n> don't really see anything wrong with the new mechanism, but I think\n> it's better to keep the patch minimal.\n\nYeah, that was kind of silly. 
Fixed.\n\n> As you say, this doesn't fix the problem that truncation might fail.\n> But as Andres and Sawada-san said, the solution to that is to get rid\n> of the comments saying that it's OK for truncation to fail and make it\n> a PANIC. However, I don't think that change needs to be part of this\n> patch. Even if we do that, we still need to do this. And even if we do\n> this, we still need to do that.\n\nOK. In addition to the above, I rewrote the comment in RelationTruncate.\n\n+\t * Delay the concurrent checkpoint's completion until this truncation\n+\t * successfully completes, so that we don't establish a redo-point between\n+\t * buffer deletion and file-truncate. Otherwise we can leave inconsistent\n+\t * file content against the WAL records after the REDO position and future\n+\t * recovery fails.\n\nHowever, a problem for me right now is that I cannot reproduce the\nproblem.\n\nTo avoid further confusion, the attached is named as *.patch.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Mon, 27 Sep 2021 17:30:36 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "\nOn 27.09.2021 11:30, Kyotaro Horiguchi wrote:\n> Thank you for the comments! (Sorry for the late resopnse.)\n>\n> At Tue, 10 Aug 2021 14:14:05 -0400, Robert Haas <robertmhaas@gmail.com> wrote in\n>> On Thu, Mar 4, 2021 at 10:01 PM Kyotaro Horiguchi\n>> <horikyota.ntt@gmail.com> wrote:\n>>> The patch assumed that CHKPT_START/COMPLETE barrier are exclusively\n>>> used each other, but MarkBufferDirtyHint which delays checkpoint start\n>>> is called in RelationTruncate while delaying checkpoint completion.\n>>> That is not a strange nor harmful behavior. I changed delayChkpt to a\n>>> bitmap integer from an enum so that both barrier are separately\n>>> triggered.\n>>>\n>>> I'm not sure this is the way to go here, though. 
This fixes the issue\n>>> of a crash during RelationTruncate, but the issue of smgrtruncate\n>>> failure during RelationTruncate still remains (unless we treat that\n>>> failure as PANIC?).\n>> I like this patch. As I understand it, we're currently cheating by\n>> allowing checkpoints to complete without necessarily flushing all of\n>> the pages that were dirty at the time we fixed the redo pointer out to\n>> disk. We think this is OK because we know that those pages are going\n>> to get truncated away, but it's not really OK because when the system\n>> starts up, it has to replay WAL starting from the checkpoint's redo\n>> pointer, but the state of the page is not the same as it was at the\n>> time when the redo pointer was the end of WAL, so redo fails. In the\n>> case described in\n>> http://postgr.es/m/BYAPR06MB63739B2692DC6DBB3C5F186CABDA0@BYAPR06MB6373.namprd06.prod.outlook.com\n>> modifications are made to the page before the redo pointer is fixed\n>> and those changes never make it to disk, but the truncation also never\n>> makes it to the disk either. With this patch, that can't happen,\n>> because no checkpoint can intervene between when we (1) decide we're\n>> not going to bother writing those dirty pages and (2) actually\n>> truncate them away. So either the pages will get written as part of\n>> the checkpoint, or else they'll be gone before the checkpoint\n>> completes. In the latter case, I suppose redo that would have modified\n>> those pages will just be skipped, thus dodging the problem.\n> I think your understanding is right.\n>\n>> In RelationTruncate, I suggest that we ought to clear the\n>> delay-checkpoint flag before rather than after calling\n>> FreeSpaceMapVacuumRange. Since the free space map is not fully\n>> WAL-logged, anything we're doing there should be non-critical. 
Also, I\n> Agreed and fixed.\n>\n>> think it might be better if MarkBufferDirtyHint stays closer to the\n>> existing coding and just uses a Boolean and an if-test to decide\n>> whether to clear the bit, instead of inventing a new mechanism. I\n>> don't really see anything wrong with the new mechanism, but I think\n>> it's better to keep the patch minimal.\n> Yeah, that was a a kind of silly. Fixed.\n>\n>> As you say, this doesn't fix the problem that truncation might fail.\n>> But as Andres and Sawada-san said, the solution to that is to get rid\n>> of the comments saying that it's OK for truncation to fail and make it\n>> a PANIC. However, I don't think that change needs to be part of this\n>> patch. Even if we do that, we still need to do this. And even if we do\n>> this, we still need to do that.\n> Ok. Addition to the aboves, I rewrote the comment in RelatinoTruncate.\n>\n> +\t * Delay the concurrent checkpoint's completion until this truncation\n> +\t * successfully completes, so that we don't establish a redo-point between\n> +\t * buffer deletion and file-truncate. Otherwise we can leave inconsistent\n> +\t * file content against the WAL records after the REDO position and future\n> +\t * recovery fails.\n>\n> However, a problem for me for now is that I cannot reproduce the\n> problem.\n>\n> To avoid further confusion, the attached is named as *.patch.\n>\n> regards.\n>\nHi. This is my first attempt to review a patch so feel free to tell me \nif I missed something.\n\nAs of today's state of REL_14_STABLE \n(ef9706bbc8ce917a366e4640df8c603c9605817a), the problem is reproducible \nusing the script provided by Daniel Wood in this \n(1335373813.287510.1573611814107@connect.xfinity.com) message. 
Also, the \nlatest patch seems not to be applicable and requires some minor tweaks.\n\n\nRegards,\n\nDaniel Shelepanov\n\n\n\n", "msg_date": "Mon, 24 Jan 2022 23:33:20 +0300", "msg_from": "Daniel Shelepanov <deniel1495@mail.ru>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "At Mon, 24 Jan 2022 23:33:20 +0300, Daniel Shelepanov <deniel1495@mail.ru> wrote in \n> Hi. This is my first attempt to review a patch so feel free to tell me\n> if I missed something.\n\nWelcome!\n\n> As of today's state of REL_14_STABLE\n> (ef9706bbc8ce917a366e4640df8c603c9605817a), the problem is\n> reproducible using the script provided by Daniel Wood in this\n> (1335373813.287510.1573611814107@connect.xfinity.com) message. Also,\n> the latest patch seems not to be applicable and requires some minor\n> tweaks.\n\nThanks for the info. The reason for my failure is that checksums were\nenabled. Disabling both fpw and checksum (and wal_log_hints)\nallows me to reproduce the issue. And what I found is:\n\nv3 patch:\n >\tvxids = GetVirtualXIDsDelayingChkpt(&nvxids, DELAY_CHKPT_COMPLETE);\n!>\tif (0 && nvxids > 0)\n >\t{\n\nUgggggggh! It looks like a debugging tweak but it prevents everything\nfrom working.\n\nThe attached is the fixed version and it surely works with the repro.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 26 Jan 2022 17:25:33 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "On Wed, Jan 26, 2022 at 3:25 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> The attached is the fixed version and it surely works with the repro.\n\nHi,\n\nI spent the morning working on this patch and came up with the\nattached version. 
I wrote substantial comments in RelationTruncate(),\nwhere I tried to make it more clear exactly what the bug is here, and\nalso in storage/proc.h, where I tried to clarify the use of the\nDELAY_CHKPT_* flags in general terms. If nobody is too sad about this\nversion, I plan to commit it.\n\nI think it should be back-patched, too, but that looks like a bit of a\npain. I think every back-branch will require different adjustments.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 15 Mar 2022 12:44:49 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "At Tue, 15 Mar 2022 12:44:49 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n> On Wed, Jan 26, 2022 at 3:25 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > The attached is the fixed version and it surely works with the repro.\n> \n> Hi,\n> \n> I spent the morning working on this patch and came up with the\n> attached version. I wrote substantial comments in RelationTruncate(),\n> where I tried to make it more clear exactly what the bug is here, and\n> also in storage/proc.h, where I tried to clarify the use of the\n> DELAY_CHKPT_* flags in general terms. If nobody is too sad about this\n> version, I plan to commit it.\n\nThanks for taking this and for the time. The additional comments\nseem to describe the flags more clearly.\n\nstorage.c:\n+\t * Make sure that a concurrent checkpoint can't complete while truncation\n+\t * is in progress.\n+\t *\n+\t * The truncation operation might drop buffers that the checkpoint\n+\t * otherwise would have flushed. If it does, then it's essential that\n+\t * the files actually get truncated on disk before the checkpoint record\n+\t * is written. Otherwise, if reply begins from that checkpoint, the\n+\t * to-be-truncated buffers might still exist on disk but have older\n+\t * 
It's OK for\n+\t * the buffers to not exist on disk at all, but not for them to have the\n+\t * wrong contents.\n\nFWIW, this seems like slightly confusing between buffer and its\ncontent. I can read it correctly so I don't mind if it is natural\nenough.\n\nOtherwise all the added/revised comments looks fine. Thanks for the\nlabor.\n\n> I think it should be back-patched, too, but that looks like a bit of a\n> pain. I think every back-branch will require different adjustments.\n\nI'll try that, if you are already working on it, please inform me. (It\nmay more than likely be too late..)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 16 Mar 2022 14:14:32 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "On Wed, Mar 16, 2022 at 1:14 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> storage.c:\n> + * Make sure that a concurrent checkpoint can't complete while truncation\n> + * is in progress.\n> + *\n> + * The truncation operation might drop buffers that the checkpoint\n> + * otherwise would have flushed. If it does, then it's essential that\n> + * the files actually get truncated on disk before the checkpoint record\n> + * is written. Otherwise, if reply begins from that checkpoint, the\n> + * to-be-truncated buffers might still exist on disk but have older\n> + * contents than expected, which can cause replay to fail. It's OK for\n> + * the buffers to not exist on disk at all, but not for them to have the\n> + * wrong contents.\n>\n> FWIW, this seems like slightly confusing between buffer and its\n> content. I can read it correctly so I don't mind if it is natural\n> enough.\n\nHmm. I think the last two instances of \"buffers\" in this comment\nshould actually say \"blocks\".\n\n> I'll try that, if you are already working on it, please inform me. 
(It\n> may more than likely be too late..)\n\nIf you want to take a crack at that, I'd be delighted.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 16 Mar 2022 10:14:56 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "At Wed, 16 Mar 2022 10:14:56 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n> Hmm. I think the last two instances of \"buffers\" in this comment\n> should actually say \"blocks\".\n\nOk. I replaced them with \"blocks\" and it looks nicer. Thanks!\n\n> > I'll try that, if you are already working on it, please inform me. (It\n> > may more than likely be too late..)\n> \n> If you want to take a crack at that, I'd be delighted.\n\nIn the end, no two branches from 10 to 14 accept the same patch.\n\nAs a cross-version check, I compared the patches for every pair of\nadjacent versions and confirmed that no hunks are lost.\n\nAll versions pass check-world.\n\n\nThe differences between each pair of adjacent versions are as follows.\n\nmaster->14:\n\n A hunk fails due to the change in how to access rel->rd_smgr.\n\n14->13:\n\n Several hunks fail due to simple context differences.\n\n13->12:\n\n Many hunks fail due to the migration of delayChkpt from PGPROC to\n PGXACT and context differences from the change of the FSM truncation\n logic in RelationTruncate.\n\n12->11:\n\n Several hunks fail due to the removal of the volatile qualifier from\n pointers to PGPROC/PGXACT.\n\n11->10:\n\n A hunk fails due to a context difference caused by the additional\n member tempNamespaceId of PGPROC.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\nFrom 71493542cda97f75d0737e3434d9aaab2beadd5f Mon Sep 17 00:00:00 2001\nFrom: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nDate: Thu, 17 Mar 2022 14:54:25 +0900\nSubject: [PATCH] Fix possible recovery trouble if TRUNCATE overlaps a\n checkpoint.\nMIME-Version: 
1.0\nContent-Type: text/plain; charset=UTF-8\nContent-Transfer-Encoding: 8bit\n\nIf TRUNCATE causes some buffers to be invalidated and thus the\ncheckpoint does not flush them, TRUNCATE must also ensure that the\ncorresponding files are truncated on disk. Otherwise, a replay\nfrom the checkpoint might find that the buffers exist but have\nthe wrong contents, which may cause replay to fail.\n\nReport by Teja Mupparti. Patch by Kyotaro Horiguchi, per a design\nsuggestion from Heikki Linnakangas, with some changes to the\ncomments by me. Review of this and a prior patch that approached\nthe issue differently by Heikki Linnakangas, Andres Freund, Álvaro\nHerrera, Masahiko Sawada, and Tom Lane.\n\nBack-patch to all supported versions.\n\nDiscussion: http://postgr.es/m/BYAPR06MB6373BF50B469CA393C614257ABF00@BYAPR06MB6373.namprd06.prod.outlook.com\n---\n src/backend/access/transam/multixact.c | 6 ++--\n src/backend/access/transam/twophase.c | 12 ++++----\n src/backend/access/transam/xact.c | 5 ++--\n src/backend/access/transam/xlog.c | 16 +++++++++--\n src/backend/access/transam/xloginsert.c | 2 +-\n src/backend/catalog/storage.c | 29 ++++++++++++++++++-\n src/backend/storage/buffer/bufmgr.c | 6 ++--\n src/backend/storage/ipc/procarray.c | 26 ++++++++++++-----\n src/backend/storage/lmgr/proc.c | 4 +--\n src/include/storage/proc.h | 37 ++++++++++++++++++++++++-\n src/include/storage/procarray.h | 5 ++--\n 11 files changed, 120 insertions(+), 28 deletions(-)\n\ndiff --git a/src/backend/access/transam/multixact.c b/src/backend/access/transam/multixact.c\nindex b643564f16..50d8bab9e2 100644\n--- a/src/backend/access/transam/multixact.c\n+++ b/src/backend/access/transam/multixact.c\n@@ -3075,8 +3075,8 @@ TruncateMultiXact(MultiXactId newOldestMulti, Oid newOldestMultiDB)\n \t * crash/basebackup, even though the state of the data directory would\n \t * require it.\n \t */\n-\tAssert(!MyProc->delayChkpt);\n-\tMyProc->delayChkpt = true;\n+\tAssert((MyProc->delayChkpt & 
DELAY_CHKPT_START) == 0);\n+\tMyProc->delayChkpt |= DELAY_CHKPT_START;\n \n \t/* WAL log truncation */\n \tWriteMTruncateXlogRec(newOldestMultiDB,\n@@ -3102,7 +3102,7 @@ TruncateMultiXact(MultiXactId newOldestMulti, Oid newOldestMultiDB)\n \t/* Then offsets */\n \tPerformOffsetsTruncation(oldestMulti, newOldestMulti);\n \n-\tMyProc->delayChkpt = false;\n+\tMyProc->delayChkpt &= ~DELAY_CHKPT_START;\n \n \tEND_CRIT_SECTION();\n \tLWLockRelease(MultiXactTruncationLock);\ndiff --git a/src/backend/access/transam/twophase.c b/src/backend/access/transam/twophase.c\nindex 7cc76c1db7..dea3f485f7 100644\n--- a/src/backend/access/transam/twophase.c\n+++ b/src/backend/access/transam/twophase.c\n@@ -474,7 +474,7 @@ MarkAsPreparingGuts(GlobalTransaction gxact, TransactionId xid, const char *gid,\n \t}\n \tproc->xid = xid;\n \tAssert(proc->xmin == InvalidTransactionId);\n-\tproc->delayChkpt = false;\n+\tproc->delayChkpt = 0;\n \tproc->statusFlags = 0;\n \tproc->pid = 0;\n \tproc->databaseId = databaseid;\n@@ -1165,7 +1165,8 @@ EndPrepare(GlobalTransaction gxact)\n \n \tSTART_CRIT_SECTION();\n \n-\tMyProc->delayChkpt = true;\n+\tAssert((MyProc->delayChkpt & DELAY_CHKPT_START) == 0);\n+\tMyProc->delayChkpt |= DELAY_CHKPT_START;\n \n \tXLogBeginInsert();\n \tfor (record = records.head; record != NULL; record = record->next)\n@@ -1208,7 +1209,7 @@ EndPrepare(GlobalTransaction gxact)\n \t * checkpoint starting after this will certainly see the gxact as a\n \t * candidate for fsyncing.\n \t */\n-\tMyProc->delayChkpt = false;\n+\tMyProc->delayChkpt &= ~DELAY_CHKPT_START;\n \n \t/*\n \t * Remember that we have this GlobalTransaction entry locked for us. If\n@@ -2275,7 +2276,8 @@ RecordTransactionCommitPrepared(TransactionId xid,\n \tSTART_CRIT_SECTION();\n \n \t/* See notes in RecordTransactionCommit */\n-\tMyProc->delayChkpt = true;\n+\tAssert((MyProc->delayChkpt & DELAY_CHKPT_START) == 0);\n+\tMyProc->delayChkpt |= DELAY_CHKPT_START;\n \n \t/*\n \t * Emit the XLOG commit record. 
Note that we mark 2PC commits as\n@@ -2323,7 +2325,7 @@ RecordTransactionCommitPrepared(TransactionId xid,\n \tTransactionIdCommitTree(xid, nchildren, children);\n \n \t/* Checkpoint can proceed now */\n-\tMyProc->delayChkpt = false;\n+\tMyProc->delayChkpt &= ~DELAY_CHKPT_START;\n \n \tEND_CRIT_SECTION();\n \ndiff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c\nindex 514044f3db..c5e7261921 100644\n--- a/src/backend/access/transam/xact.c\n+++ b/src/backend/access/transam/xact.c\n@@ -1335,8 +1335,9 @@ RecordTransactionCommit(void)\n \t\t * This makes checkpoint's determination of which xacts are delayChkpt\n \t\t * a bit fuzzy, but it doesn't matter.\n \t\t */\n+\t\tAssert((MyProc->delayChkpt & DELAY_CHKPT_START) == 0);\n \t\tSTART_CRIT_SECTION();\n-\t\tMyProc->delayChkpt = true;\n+\t\tMyProc->delayChkpt |= DELAY_CHKPT_START;\n \n \t\tSetCurrentTransactionStopTimestamp();\n \n@@ -1437,7 +1438,7 @@ RecordTransactionCommit(void)\n \t */\n \tif (markXidCommitted)\n \t{\n-\t\tMyProc->delayChkpt = false;\n+\t\tMyProc->delayChkpt &= ~DELAY_CHKPT_START;\n \t\tEND_CRIT_SECTION();\n \t}\n \ndiff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\nindex 3e71aea71f..7cc49819f0 100644\n--- a/src/backend/access/transam/xlog.c\n+++ b/src/backend/access/transam/xlog.c\n@@ -9228,18 +9228,30 @@ CreateCheckPoint(int flags)\n \t * and we will correctly flush the update below. 
So we cannot miss any\n \t * xacts we need to wait for.\n \t */\n-\tvxids = GetVirtualXIDsDelayingChkpt(&nvxids);\n+\tvxids = GetVirtualXIDsDelayingChkpt(&nvxids, DELAY_CHKPT_START);\n \tif (nvxids > 0)\n \t{\n \t\tdo\n \t\t{\n \t\t\tpg_usleep(10000L);\t/* wait for 10 msec */\n-\t\t} while (HaveVirtualXIDsDelayingChkpt(vxids, nvxids));\n+\t\t} while (HaveVirtualXIDsDelayingChkpt(vxids, nvxids,\n+\t\t\t\t\t\t\t\t\t\t\t DELAY_CHKPT_START));\n \t}\n \tpfree(vxids);\n \n \tCheckPointGuts(checkPoint.redo, flags);\n \n+\tvxids = GetVirtualXIDsDelayingChkpt(&nvxids, DELAY_CHKPT_COMPLETE);\n+\tif (nvxids > 0)\n+\t{\n+\t\tdo\n+\t\t{\n+\t\t\tpg_usleep(10000L);\t/* wait for 10 msec */\n+\t\t} while (HaveVirtualXIDsDelayingChkpt(vxids, nvxids,\n+\t\t\t\t\t\t\t\t\t\t\t DELAY_CHKPT_COMPLETE));\n+\t}\n+\tpfree(vxids);\n+\n \t/*\n \t * Take a snapshot of running transactions and write this to WAL. This\n \t * allows us to reconstruct the state of running transactions during\ndiff --git a/src/backend/access/transam/xloginsert.c b/src/backend/access/transam/xloginsert.c\nindex b153fad594..1af4a90c41 100644\n--- a/src/backend/access/transam/xloginsert.c\n+++ b/src/backend/access/transam/xloginsert.c\n@@ -925,7 +925,7 @@ XLogSaveBufferForHint(Buffer buffer, bool buffer_std)\n \t/*\n \t * Ensure no checkpoint can change our view of RedoRecPtr.\n \t */\n-\tAssert(MyProc->delayChkpt);\n+\tAssert((MyProc->delayChkpt & DELAY_CHKPT_START) != 0);\n \n \t/*\n \t * Update RedoRecPtr so that we can make the right decision\ndiff --git a/src/backend/catalog/storage.c b/src/backend/catalog/storage.c\nindex cba7a9ada0..fa5682dce8 100644\n--- a/src/backend/catalog/storage.c\n+++ b/src/backend/catalog/storage.c\n@@ -325,6 +325,22 @@ RelationTruncate(Relation rel, BlockNumber nblocks)\n \n \tRelationPreTruncate(rel);\n \n+\t/*\n+\t * Make sure that a concurrent checkpoint can't complete while truncation\n+\t * is in progress.\n+\t *\n+\t * The truncation operation might drop buffers that the 
checkpoint\n+\t * otherwise would have flushed. If it does, then it's essential that\n+\t * the files actually get truncated on disk before the checkpoint record\n+\t * is written. Otherwise, if reply begins from that checkpoint, the\n+\t * to-be-truncated blocks might still exist on disk but have older\n+\t * contents than expected, which can cause replay to fail. It's OK for\n+\t * the blocks to not exist on disk at all, but not for them to have the\n+\t * wrong contents.\n+\t */\n+\tAssert((MyProc->delayChkpt & DELAY_CHKPT_COMPLETE) == 0);\n+\tMyProc->delayChkpt |= DELAY_CHKPT_COMPLETE;\n+\n \t/*\n \t * We WAL-log the truncation before actually truncating, which means\n \t * trouble if the truncation fails. If we then crash, the WAL replay\n@@ -363,13 +379,24 @@ RelationTruncate(Relation rel, BlockNumber nblocks)\n \t\t\tXLogFlush(lsn);\n \t}\n \n-\t/* Do the real work to truncate relation forks */\n+\t/*\n+\t * This will first remove any buffers from the buffer pool that should no\n+\t * longer exist after truncation is complete, and then truncate the\n+\t * corresponding files on disk.\n+\t */\n \tsmgrtruncate(rel->rd_smgr, forks, nforks, blocks);\n \n+\t/* We've done all the critical work, so checkpoints are OK now. */\n+\tMyProc->delayChkpt &= ~DELAY_CHKPT_COMPLETE;\n+\n \t/*\n \t * Update upper-level FSM pages to account for the truncation. 
This is\n \t * important because the just-truncated pages were likely marked as\n \t * all-free, and would be preferentially selected.\n+\t *\n+\t * NB: There's no point in delaying checkpoints until this is done.\n+\t * Because the FSM is not WAL-logged, we have to be prepared for the\n+\t * possibility of corruption after a crash anyway.\n \t */\n \tif (need_fsm_vacuum)\n \t\tFreeSpaceMapVacuumRange(rel, nblocks, InvalidBlockNumber);\ndiff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c\nindex ffc6056c60..a55545a187 100644\n--- a/src/backend/storage/buffer/bufmgr.c\n+++ b/src/backend/storage/buffer/bufmgr.c\n@@ -3946,7 +3946,9 @@ MarkBufferDirtyHint(Buffer buffer, bool buffer_std)\n \t\t\t * essential that CreateCheckpoint waits for virtual transactions\n \t\t\t * rather than full transactionids.\n \t\t\t */\n-\t\t\tMyProc->delayChkpt = delayChkpt = true;\n+\t\t\tAssert((MyProc->delayChkpt & DELAY_CHKPT_START) == 0);\n+\t\t\tMyProc->delayChkpt |= DELAY_CHKPT_START;\n+\t\t\tdelayChkpt = true;\n \t\t\tlsn = XLogSaveBufferForHint(buffer, buffer_std);\n \t\t}\n \n@@ -3979,7 +3981,7 @@ MarkBufferDirtyHint(Buffer buffer, bool buffer_std)\n \t\tUnlockBufHdr(bufHdr, buf_state);\n \n \t\tif (delayChkpt)\n-\t\t\tMyProc->delayChkpt = false;\n+\t\t\tMyProc->delayChkpt &= ~DELAY_CHKPT_START;\n \n \t\tif (dirtied)\n \t\t{\ndiff --git a/src/backend/storage/ipc/procarray.c b/src/backend/storage/ipc/procarray.c\nindex f047f9a242..ae71d7538b 100644\n--- a/src/backend/storage/ipc/procarray.c\n+++ b/src/backend/storage/ipc/procarray.c\n@@ -689,7 +689,10 @@ ProcArrayEndTransaction(PGPROC *proc, TransactionId latestXid)\n \n \t\tproc->lxid = InvalidLocalTransactionId;\n \t\tproc->xmin = InvalidTransactionId;\n-\t\tproc->delayChkpt = false;\t/* be sure this is cleared in abort */\n+\n+\t\t/* be sure this is cleared in abort */\n+\t\tproc->delayChkpt = 0;\n+\n \t\tproc->recoveryConflictPending = false;\n \n \t\t/* must be cleared with xid/xmin: */\n@@ 
-728,7 +731,10 @@ ProcArrayEndTransactionInternal(PGPROC *proc, TransactionId latestXid)\n \tproc->xid = InvalidTransactionId;\n \tproc->lxid = InvalidLocalTransactionId;\n \tproc->xmin = InvalidTransactionId;\n-\tproc->delayChkpt = false;\t/* be sure this is cleared in abort */\n+\n+\t/* be sure this is cleared in abort */\n+\tproc->delayChkpt = 0;\n+\n \tproc->recoveryConflictPending = false;\n \n \t/* must be cleared with xid/xmin: */\n@@ -3043,7 +3049,8 @@ GetOldestSafeDecodingTransactionId(bool catalogOnly)\n * delaying checkpoint because they have critical actions in progress.\n *\n * Constructs an array of VXIDs of transactions that are currently in commit\n- * critical sections, as shown by having delayChkpt set in their PGPROC.\n+ * critical sections, as shown by having specified delayChkpt bits set in their\n+ * PGPROC.\n *\n * Returns a palloc'd array that should be freed by the caller.\n * *nvxids is the number of valid entries.\n@@ -3057,13 +3064,15 @@ GetOldestSafeDecodingTransactionId(bool catalogOnly)\n * for clearing of delayChkpt to propagate is unimportant for correctness.\n */\n VirtualTransactionId *\n-GetVirtualXIDsDelayingChkpt(int *nvxids)\n+GetVirtualXIDsDelayingChkpt(int *nvxids, int type)\n {\n \tVirtualTransactionId *vxids;\n \tProcArrayStruct *arrayP = procArray;\n \tint\t\t\tcount = 0;\n \tint\t\t\tindex;\n \n+\tAssert(type != 0);\n+\n \t/* allocate what's certainly enough result space */\n \tvxids = (VirtualTransactionId *)\n \t\tpalloc(sizeof(VirtualTransactionId) * arrayP->maxProcs);\n@@ -3075,7 +3084,7 @@ GetVirtualXIDsDelayingChkpt(int *nvxids)\n \t\tint\t\t\tpgprocno = arrayP->pgprocnos[index];\n \t\tPGPROC\t *proc = &allProcs[pgprocno];\n \n-\t\tif (proc->delayChkpt)\n+\t\tif ((proc->delayChkpt & type) != 0)\n \t\t{\n \t\t\tVirtualTransactionId vxid;\n \n@@ -3101,12 +3110,14 @@ GetVirtualXIDsDelayingChkpt(int *nvxids)\n * those numbers should be small enough for it not to be a problem.\n */\n 
bool\n-HaveVirtualXIDsDelayingChkpt(VirtualTransactionId *vxids, int nvxids)\n+HaveVirtualXIDsDelayingChkpt(VirtualTransactionId *vxids, int nvxids, int type)\n {\n \tbool\t\tresult = false;\n \tProcArrayStruct *arrayP = procArray;\n \tint\t\t\tindex;\n \n+\tAssert(type != 0);\n+\n \tLWLockAcquire(ProcArrayLock, LW_SHARED);\n \n \tfor (index = 0; index < arrayP->numProcs; index++)\n@@ -3117,7 +3128,8 @@ HaveVirtualXIDsDelayingChkpt(VirtualTransactionId *vxids, int nvxids)\n \n \t\tGET_VXID_FROM_PGPROC(vxid, *proc);\n \n-\t\tif (proc->delayChkpt && VirtualTransactionIdIsValid(vxid))\n+\t\tif ((proc->delayChkpt & type) != 0 &&\n+\t\t\tVirtualTransactionIdIsValid(vxid))\n \t\t{\n \t\t\tint\t\t\ti;\n \ndiff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c\nindex 2575ea1ca0..c50a419a54 100644\n--- a/src/backend/storage/lmgr/proc.c\n+++ b/src/backend/storage/lmgr/proc.c\n@@ -394,7 +394,7 @@ InitProcess(void)\n \tMyProc->roleId = InvalidOid;\n \tMyProc->tempNamespaceId = InvalidOid;\n \tMyProc->isBackgroundWorker = IsBackgroundWorker;\n-\tMyProc->delayChkpt = false;\n+\tMyProc->delayChkpt = 0;\n \tMyProc->statusFlags = 0;\n \t/* NB -- autovac launcher intentionally does not set IS_AUTOVACUUM */\n \tif (IsAutoVacuumWorkerProcess())\n@@ -579,7 +579,7 @@ InitAuxiliaryProcess(void)\n \tMyProc->roleId = InvalidOid;\n \tMyProc->tempNamespaceId = InvalidOid;\n \tMyProc->isBackgroundWorker = IsBackgroundWorker;\n-\tMyProc->delayChkpt = false;\n+\tMyProc->delayChkpt = 0;\n \tMyProc->statusFlags = 0;\n \tMyProc->lwWaiting = false;\n \tMyProc->lwWaitMode = 0;\ndiff --git a/src/include/storage/proc.h b/src/include/storage/proc.h\nindex cfabfdbedf..b78012ec2b 100644\n--- a/src/include/storage/proc.h\n+++ b/src/include/storage/proc.h\n@@ -86,6 +86,41 @@ struct XidCache\n */\n #define INVALID_PGPROCNO\t\tPG_INT32_MAX\n \n+/*\n+ * Flags for PGPROC.delayChkpt\n+ *\n+ * These flags can be used to delay the start or completion of a checkpoint\n+ * for short periods. 
A flag is in effect if the corresponding bit is set in\n+ * the PGPROC of any backend.\n+ *\n+ * For our purposes here, a checkpoint has three phases: (1) determine the\n+ * location to which the redo pointer will be moved, (2) write all the\n+ * data durably to disk, and (3) WAL-log the checkpoint.\n+ *\n+ * Setting DELAY_CHKPT_START prevents the system from moving from phase 1\n+ * to phase 2. This is useful when we are performing a WAL-logged modification\n+ * of data that will be flushed to disk in phase 2. By setting this flag\n+ * before writing WAL and clearing it after we've both written WAL and\n+ * performed the corresponding modification, we ensure that if the WAL record\n+ * is inserted prior to the new redo point, the corresponding data changes will\n+ * also be flushed to disk before the checkpoint can complete. (In the\n+ * extremely common case where the data being modified is in shared buffers\n+ * and we acquire an exclusive content lock on the relevant buffers before\n+ * writing WAL, this mechanism is not needed, because phase 2 will block\n+ * until we release the content lock and then flush the modified data to\n+ * disk.)\n+ *\n+ * Setting DELAY_CHKPT_COMPLETE prevents the system from moving from phase 2\n+ * to phase 3. This is useful if we are performing a WAL-logged operation that\n+ * might invalidate buffers, such as relation truncation. In this case, we need\n+ * to ensure that any buffers which were invalidated and thus not flushed by\n+ * the checkpoint are actaully destroyed on disk. 
Replay can cope with a file\n+ * or block that doesn't exist, but not with a block that has the wrong\n+ * contents.\n+ */\n+#define DELAY_CHKPT_START\t\t(1<<0)\n+#define DELAY_CHKPT_COMPLETE\t(1<<1)\n+\n typedef enum\n {\n \tPROC_WAIT_STATUS_OK,\n@@ -191,7 +226,7 @@ struct PGPROC\n \tpg_atomic_uint64 waitStart; /* time at which wait for lock acquisition\n \t\t\t\t\t\t\t\t * started */\n \n-\tbool\t\tdelayChkpt;\t\t/* true if this proc delays checkpoint start */\n+\tint\t\t\tdelayChkpt;\t\t/* for DELAY_CHKPT_* flags */\n \n \tuint8\t\tstatusFlags;\t/* this backend's status flags, see PROC_*\n \t\t\t\t\t\t\t\t * above. mirrored in\ndiff --git a/src/include/storage/procarray.h b/src/include/storage/procarray.h\nindex b01fa52139..93de230a32 100644\n--- a/src/include/storage/procarray.h\n+++ b/src/include/storage/procarray.h\n@@ -59,8 +59,9 @@ extern TransactionId GetOldestActiveTransactionId(void);\n extern TransactionId GetOldestSafeDecodingTransactionId(bool catalogOnly);\n extern void GetReplicationHorizons(TransactionId *slot_xmin, TransactionId *catalog_xmin);\n \n-extern VirtualTransactionId *GetVirtualXIDsDelayingChkpt(int *nvxids);\n-extern bool HaveVirtualXIDsDelayingChkpt(VirtualTransactionId *vxids, int nvxids);\n+extern VirtualTransactionId *GetVirtualXIDsDelayingChkpt(int *nvxids, int type);\n+extern bool HaveVirtualXIDsDelayingChkpt(VirtualTransactionId *vxids,\n+\t\t\t\t\t\t\t\t\t\t int nvxids, int type);\n \n extern PGPROC *BackendPidGetProc(int pid);\n extern PGPROC *BackendPidGetProcWithLock(int pid);\n-- \n2.27.0\n\n\nFrom f1832b4aaa3fcd06777a1d3bd9e322b3d85dd634 Mon Sep 17 00:00:00 2001\nFrom: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nDate: Thu, 17 Mar 2022 19:11:22 +0900\nSubject: [PATCH] Fix possible recovery trouble if TRUNCATE overlaps a\n checkpoint.\nMIME-Version: 1.0\nContent-Type: text/plain; charset=UTF-8\nContent-Transfer-Encoding: 8bit\n\nIf TRUNCATE causes some buffers to be invalidated and thus the\ncheckpoint does not flush them, 
TRUNCATE must also ensure that the\ncorresponding files are truncated on disk. Otherwise, a replay\nfrom the checkpoint might find that the buffers exist but have\nthe wrong contents, which may cause replay to fail.\n\nReport by Teja Mupparti. Patch by Kyotaro Horiguchi, per a design\nsuggestion from Heikki Linnakangas, with some changes to the\ncomments by me. Review of this and a prior patch that approached\nthe issue differently by Heikki Linnakangas, Andres Freund, Álvaro\nHerrera, Masahiko Sawada, and Tom Lane.\n\nBack-patch to all supported versions.\n\nDiscussion: http://postgr.es/m/BYAPR06MB6373BF50B469CA393C614257ABF00@BYAPR06MB6373.namprd06.prod.outlook.com\n---\n src/backend/access/transam/multixact.c | 6 ++--\n src/backend/access/transam/twophase.c | 12 ++++----\n src/backend/access/transam/xact.c | 5 ++--\n src/backend/access/transam/xlog.c | 16 +++++++++--\n src/backend/access/transam/xloginsert.c | 2 +-\n src/backend/catalog/storage.c | 29 ++++++++++++++++++-\n src/backend/storage/buffer/bufmgr.c | 6 ++--\n src/backend/storage/ipc/procarray.c | 26 ++++++++++++-----\n src/backend/storage/lmgr/proc.c | 4 +--\n src/include/storage/proc.h | 37 ++++++++++++++++++++++++-\n src/include/storage/procarray.h | 5 ++--\n 11 files changed, 120 insertions(+), 28 deletions(-)\n\ndiff --git a/src/backend/access/transam/multixact.c b/src/backend/access/transam/multixact.c\nindex 7990b5e5dd..3e6443fd41 100644\n--- a/src/backend/access/transam/multixact.c\n+++ b/src/backend/access/transam/multixact.c\n@@ -3071,8 +3071,8 @@ TruncateMultiXact(MultiXactId newOldestMulti, Oid newOldestMultiDB)\n \t * crash/basebackup, even though the state of the data directory would\n \t * require it.\n \t */\n-\tAssert(!MyProc->delayChkpt);\n-\tMyProc->delayChkpt = true;\n+\tAssert((MyProc->delayChkpt & DELAY_CHKPT_START) == 0);\n+\tMyProc->delayChkpt |= DELAY_CHKPT_START;\n \n \t/* WAL log truncation */\n \tWriteMTruncateXlogRec(newOldestMultiDB,\n@@ -3098,7 +3098,7 @@ 
TruncateMultiXact(MultiXactId newOldestMulti, Oid newOldestMultiDB)\n \t/* Then offsets */\n \tPerformOffsetsTruncation(oldestMulti, newOldestMulti);\n \n-\tMyProc->delayChkpt = false;\n+\tMyProc->delayChkpt &= ~DELAY_CHKPT_START;\n \n \tEND_CRIT_SECTION();\n \tLWLockRelease(MultiXactTruncationLock);\ndiff --git a/src/backend/access/transam/twophase.c b/src/backend/access/transam/twophase.c\nindex b1a221849a..716c17c98f 100644\n--- a/src/backend/access/transam/twophase.c\n+++ b/src/backend/access/transam/twophase.c\n@@ -476,7 +476,7 @@ MarkAsPreparingGuts(GlobalTransaction gxact, TransactionId xid, const char *gid,\n \t}\n \tpgxact->xid = xid;\n \tpgxact->xmin = InvalidTransactionId;\n-\tproc->delayChkpt = false;\n+\tproc->delayChkpt = 0;\n \tpgxact->vacuumFlags = 0;\n \tproc->pid = 0;\n \tproc->databaseId = databaseid;\n@@ -1170,7 +1170,8 @@ EndPrepare(GlobalTransaction gxact)\n \n \tSTART_CRIT_SECTION();\n \n-\tMyProc->delayChkpt = true;\n+\tAssert((MyProc->delayChkpt & DELAY_CHKPT_START) == 0);\n+\tMyProc->delayChkpt |= DELAY_CHKPT_START;\n \n \tXLogBeginInsert();\n \tfor (record = records.head; record != NULL; record = record->next)\n@@ -1213,7 +1214,7 @@ EndPrepare(GlobalTransaction gxact)\n \t * checkpoint starting after this will certainly see the gxact as a\n \t * candidate for fsyncing.\n \t */\n-\tMyProc->delayChkpt = false;\n+\tMyProc->delayChkpt &= ~DELAY_CHKPT_START;\n \n \t/*\n \t * Remember that we have this GlobalTransaction entry locked for us. If\n@@ -2286,7 +2287,8 @@ RecordTransactionCommitPrepared(TransactionId xid,\n \tSTART_CRIT_SECTION();\n \n \t/* See notes in RecordTransactionCommit */\n-\tMyProc->delayChkpt = true;\n+\tAssert((MyProc->delayChkpt & DELAY_CHKPT_START) == 0);\n+\tMyProc->delayChkpt |= DELAY_CHKPT_START;\n \n \t/*\n \t * Emit the XLOG commit record. 
Note that we mark 2PC commits as\n@@ -2334,7 +2336,7 @@ RecordTransactionCommitPrepared(TransactionId xid,\n \tTransactionIdCommitTree(xid, nchildren, children);\n \n \t/* Checkpoint can proceed now */\n-\tMyProc->delayChkpt = false;\n+\tMyProc->delayChkpt &= ~DELAY_CHKPT_START;\n \n \tEND_CRIT_SECTION();\n \ndiff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c\nindex fb6220e491..da6ce5a09e 100644\n--- a/src/backend/access/transam/xact.c\n+++ b/src/backend/access/transam/xact.c\n@@ -1308,8 +1308,9 @@ RecordTransactionCommit(void)\n \t\t * This makes checkpoint's determination of which xacts are delayChkpt\n \t\t * a bit fuzzy, but it doesn't matter.\n \t\t */\n+\t\tAssert((MyProc->delayChkpt & DELAY_CHKPT_START) == 0);\n \t\tSTART_CRIT_SECTION();\n-\t\tMyProc->delayChkpt = true;\n+\t\tMyProc->delayChkpt |= DELAY_CHKPT_START;\n \n \t\tSetCurrentTransactionStopTimestamp();\n \n@@ -1410,7 +1411,7 @@ RecordTransactionCommit(void)\n \t */\n \tif (markXidCommitted)\n \t{\n-\t\tMyProc->delayChkpt = false;\n+\t\tMyProc->delayChkpt &= ~DELAY_CHKPT_START;\n \t\tEND_CRIT_SECTION();\n \t}\n \ndiff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\nindex 7bef438d9a..9522c6531f 100644\n--- a/src/backend/access/transam/xlog.c\n+++ b/src/backend/access/transam/xlog.c\n@@ -9022,18 +9022,30 @@ CreateCheckPoint(int flags)\n \t * and we will correctly flush the update below. 
So we cannot miss any\n \t * xacts we need to wait for.\n \t */\n-\tvxids = GetVirtualXIDsDelayingChkpt(&nvxids);\n+\tvxids = GetVirtualXIDsDelayingChkpt(&nvxids, DELAY_CHKPT_START);\n \tif (nvxids > 0)\n \t{\n \t\tdo\n \t\t{\n \t\t\tpg_usleep(10000L);\t/* wait for 10 msec */\n-\t\t} while (HaveVirtualXIDsDelayingChkpt(vxids, nvxids));\n+\t\t} while (HaveVirtualXIDsDelayingChkpt(vxids, nvxids,\n+\t\t\t\t\t\t\t\t\t\t\t DELAY_CHKPT_START));\n \t}\n \tpfree(vxids);\n \n \tCheckPointGuts(checkPoint.redo, flags);\n \n+\tvxids = GetVirtualXIDsDelayingChkpt(&nvxids, DELAY_CHKPT_COMPLETE);\n+\tif (nvxids > 0)\n+\t{\n+\t\tdo\n+\t\t{\n+\t\t\tpg_usleep(10000L);\t/* wait for 10 msec */\n+\t\t} while (HaveVirtualXIDsDelayingChkpt(vxids, nvxids,\n+\t\t\t\t\t\t\t\t\t\t\t DELAY_CHKPT_COMPLETE));\n+\t}\n+\tpfree(vxids);\n+\n \t/*\n \t * Take a snapshot of running transactions and write this to WAL. This\n \t * allows us to reconstruct the state of running transactions during\ndiff --git a/src/backend/access/transam/xloginsert.c b/src/backend/access/transam/xloginsert.c\nindex b21679f09e..5cff486d9e 100644\n--- a/src/backend/access/transam/xloginsert.c\n+++ b/src/backend/access/transam/xloginsert.c\n@@ -904,7 +904,7 @@ XLogSaveBufferForHint(Buffer buffer, bool buffer_std)\n \t/*\n \t * Ensure no checkpoint can change our view of RedoRecPtr.\n \t */\n-\tAssert(MyProc->delayChkpt);\n+\tAssert((MyProc->delayChkpt & DELAY_CHKPT_START) != 0);\n \n \t/*\n \t * Update RedoRecPtr so that we can make the right decision\ndiff --git a/src/backend/catalog/storage.c b/src/backend/catalog/storage.c\nindex 74216785b7..0eb14cc885 100644\n--- a/src/backend/catalog/storage.c\n+++ b/src/backend/catalog/storage.c\n@@ -325,6 +325,22 @@ RelationTruncate(Relation rel, BlockNumber nblocks)\n \n \tRelationPreTruncate(rel);\n \n+\t/*\n+\t * Make sure that a concurrent checkpoint can't complete while truncation\n+\t * is in progress.\n+\t *\n+\t * The truncation operation might drop buffers that the 
checkpoint\n+\t * otherwise would have flushed. If it does, then it's essential that\n+\t * the files actually get truncated on disk before the checkpoint record\n+\t * is written. Otherwise, if replay begins from that checkpoint, the\n+\t * to-be-truncated blocks might still exist on disk but have older\n+\t * contents than expected, which can cause replay to fail. It's OK for\n+\t * the blocks to not exist on disk at all, but not for them to have the\n+\t * wrong contents.\n+\t */\n+\tAssert((MyProc->delayChkpt & DELAY_CHKPT_COMPLETE) == 0);\n+\tMyProc->delayChkpt |= DELAY_CHKPT_COMPLETE;\n+\n \t/*\n \t * We WAL-log the truncation before actually truncating, which means\n \t * trouble if the truncation fails. If we then crash, the WAL replay\n@@ -363,13 +379,24 @@ RelationTruncate(Relation rel, BlockNumber nblocks)\n \t\t\tXLogFlush(lsn);\n \t}\n \n-\t/* Do the real work to truncate relation forks */\n+\t/*\n+\t * This will first remove any buffers from the buffer pool that should no\n+\t * longer exist after truncation is complete, and then truncate the\n+\t * corresponding files on disk.\n+\t */\n \tsmgrtruncate(rel->rd_smgr, forks, nforks, blocks);\n \n+\t/* We've done all the critical work, so checkpoints are OK now. */\n+\tMyProc->delayChkpt &= ~DELAY_CHKPT_COMPLETE;\n+\n \t/*\n \t * Update upper-level FSM pages to account for the truncation. 
This is\n \t * important because the just-truncated pages were likely marked as\n \t * all-free, and would be preferentially selected.\n+\t *\n+\t * NB: There's no point in delaying checkpoints until this is done.\n+\t * Because the FSM is not WAL-logged, we have to be prepared for the\n+\t * possibility of corruption after a crash anyway.\n \t */\n \tif (need_fsm_vacuum)\n \t\tFreeSpaceMapVacuumRange(rel, nblocks, InvalidBlockNumber);\ndiff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c\nindex 597afedef7..033ef46811 100644\n--- a/src/backend/storage/buffer/bufmgr.c\n+++ b/src/backend/storage/buffer/bufmgr.c\n@@ -3647,7 +3647,9 @@ MarkBufferDirtyHint(Buffer buffer, bool buffer_std)\n \t\t\t * essential that CreateCheckpoint waits for virtual transactions\n \t\t\t * rather than full transactionids.\n \t\t\t */\n-\t\t\tMyProc->delayChkpt = delayChkpt = true;\n+\t\t\tAssert((MyProc->delayChkpt & DELAY_CHKPT_START) == 0);\n+\t\t\tMyProc->delayChkpt |= DELAY_CHKPT_START;\n+\t\t\tdelayChkpt = true;\n \t\t\tlsn = XLogSaveBufferForHint(buffer, buffer_std);\n \t\t}\n \n@@ -3680,7 +3682,7 @@ MarkBufferDirtyHint(Buffer buffer, bool buffer_std)\n \t\tUnlockBufHdr(bufHdr, buf_state);\n \n \t\tif (delayChkpt)\n-\t\t\tMyProc->delayChkpt = false;\n+\t\t\tMyProc->delayChkpt &= ~DELAY_CHKPT_START;\n \n \t\tif (dirtied)\n \t\t{\ndiff --git a/src/backend/storage/ipc/procarray.c b/src/backend/storage/ipc/procarray.c\nindex 02b157243e..725680f34f 100644\n--- a/src/backend/storage/ipc/procarray.c\n+++ b/src/backend/storage/ipc/procarray.c\n@@ -434,7 +434,10 @@ ProcArrayEndTransaction(PGPROC *proc, TransactionId latestXid)\n \t\tpgxact->xmin = InvalidTransactionId;\n \t\t/* must be cleared with xid/xmin: */\n \t\tpgxact->vacuumFlags &= ~PROC_VACUUM_STATE_MASK;\n-\t\tproc->delayChkpt = false;\t/* be sure this is cleared in abort */\n+\n+\t\t/* be sure this is cleared in abort */\n+\t\tproc->delayChkpt = 0;\n+\n \t\tproc->recoveryConflictPending = false;\n 
\n \t\tAssert(pgxact->nxids == 0);\n@@ -456,7 +459,10 @@ ProcArrayEndTransactionInternal(PGPROC *proc, PGXACT *pgxact,\n \tpgxact->xmin = InvalidTransactionId;\n \t/* must be cleared with xid/xmin: */\n \tpgxact->vacuumFlags &= ~PROC_VACUUM_STATE_MASK;\n-\tproc->delayChkpt = false;\t/* be sure this is cleared in abort */\n+\n+\t/* be sure this is cleared in abort */\n+\tproc->delayChkpt = 0;\n+\n \tproc->recoveryConflictPending = false;\n \n \t/* Clear the subtransaction-XID cache too while holding the lock */\n@@ -2272,7 +2278,8 @@ GetOldestSafeDecodingTransactionId(bool catalogOnly)\n * delaying checkpoint because they have critical actions in progress.\n *\n * Constructs an array of VXIDs of transactions that are currently in commit\n- * critical sections, as shown by having delayChkpt set in their PGPROC.\n+ * critical sections, as shown by having specified delayChkpt bits set in their\n+ * PGPROC.\n *\n * Returns a palloc'd array that should be freed by the caller.\n * *nvxids is the number of valid entries.\n@@ -2286,13 +2293,15 @@ GetOldestSafeDecodingTransactionId(bool catalogOnly)\n * for clearing of delayChkpt to propagate is unimportant for correctness.\n */\n VirtualTransactionId *\n-GetVirtualXIDsDelayingChkpt(int *nvxids)\n+GetVirtualXIDsDelayingChkpt(int *nvxids, int type)\n {\n \tVirtualTransactionId *vxids;\n \tProcArrayStruct *arrayP = procArray;\n \tint\t\t\tcount = 0;\n \tint\t\t\tindex;\n \n+\tAssert(type != 0);\n+\n \t/* allocate what's certainly enough result space */\n \tvxids = (VirtualTransactionId *)\n \t\tpalloc(sizeof(VirtualTransactionId) * arrayP->maxProcs);\n@@ -2304,7 +2313,7 @@ GetVirtualXIDsDelayingChkpt(int *nvxids)\n \t\tint\t\t\tpgprocno = arrayP->pgprocnos[index];\n \t\tPGPROC\t *proc = &allProcs[pgprocno];\n \n-\t\tif (proc->delayChkpt)\n+\t\tif ((proc->delayChkpt & type) != 0)\n \t\t{\n \t\t\tVirtualTransactionId vxid;\n \n@@ -2330,12 +2339,14 @@ GetVirtualXIDsDelayingChkpt(int *nvxids)\n * those numbers should be small 
enough for it not to be a problem.\n */\n bool\n-HaveVirtualXIDsDelayingChkpt(VirtualTransactionId *vxids, int nvxids)\n+HaveVirtualXIDsDelayingChkpt(VirtualTransactionId *vxids, int nvxids, int type)\n {\n \tbool\t\tresult = false;\n \tProcArrayStruct *arrayP = procArray;\n \tint\t\t\tindex;\n \n+\tAssert(type != 0);\n+\n \tLWLockAcquire(ProcArrayLock, LW_SHARED);\n \n \tfor (index = 0; index < arrayP->numProcs; index++)\n@@ -2346,7 +2357,8 @@ HaveVirtualXIDsDelayingChkpt(VirtualTransactionId *vxids, int nvxids)\n \n \t\tGET_VXID_FROM_PGPROC(vxid, *proc);\n \n-\t\tif (proc->delayChkpt && VirtualTransactionIdIsValid(vxid))\n+\t\tif ((proc->delayChkpt & type) != 0 &&\n+\t\t\tVirtualTransactionIdIsValid(vxid))\n \t\t{\n \t\t\tint\t\t\ti;\n \ndiff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c\nindex 0d70b03eeb..f3a6c598bf 100644\n--- a/src/backend/storage/lmgr/proc.c\n+++ b/src/backend/storage/lmgr/proc.c\n@@ -396,7 +396,7 @@ InitProcess(void)\n \tMyProc->roleId = InvalidOid;\n \tMyProc->tempNamespaceId = InvalidOid;\n \tMyProc->isBackgroundWorker = IsBackgroundWorker;\n-\tMyProc->delayChkpt = false;\n+\tMyProc->delayChkpt = 0;\n \tMyPgXact->vacuumFlags = 0;\n \t/* NB -- autovac launcher intentionally does not set IS_AUTOVACUUM */\n \tif (IsAutoVacuumWorkerProcess())\n@@ -578,7 +578,7 @@ InitAuxiliaryProcess(void)\n \tMyProc->roleId = InvalidOid;\n \tMyProc->tempNamespaceId = InvalidOid;\n \tMyProc->isBackgroundWorker = IsBackgroundWorker;\n-\tMyProc->delayChkpt = false;\n+\tMyProc->delayChkpt = 0;\n \tMyPgXact->vacuumFlags = 0;\n \tMyProc->lwWaiting = false;\n \tMyProc->lwWaitMode = 0;\ndiff --git a/src/include/storage/proc.h b/src/include/storage/proc.h\nindex b3ea1a2586..5798b91186 100644\n--- a/src/include/storage/proc.h\n+++ b/src/include/storage/proc.h\n@@ -83,6 +83,41 @@ struct XidCache\n */\n #define INVALID_PGPROCNO\t\tPG_INT32_MAX\n \n+/*\n+ * Flags for PGPROC.delayChkpt\n+ *\n+ * These flags can be used to delay the start or 
completion of a checkpoint\n+ * for short periods. A flag is in effect if the corresponding bit is set in\n+ * the PGPROC of any backend.\n+ *\n+ * For our purposes here, a checkpoint has three phases: (1) determine the\n+ * location to which the redo pointer will be moved, (2) write all the\n+ * data durably to disk, and (3) WAL-log the checkpoint.\n+ *\n+ * Setting DELAY_CHKPT_START prevents the system from moving from phase 1\n+ * to phase 2. This is useful when we are performing a WAL-logged modification\n+ * of data that will be flushed to disk in phase 2. By setting this flag\n+ * before writing WAL and clearing it after we've both written WAL and\n+ * performed the corresponding modification, we ensure that if the WAL record\n+ * is inserted prior to the new redo point, the corresponding data changes will\n+ * also be flushed to disk before the checkpoint can complete. (In the\n+ * extremely common case where the data being modified is in shared buffers\n+ * and we acquire an exclusive content lock on the relevant buffers before\n+ * writing WAL, this mechanism is not needed, because phase 2 will block\n+ * until we release the content lock and then flush the modified data to\n+ * disk.)\n+ *\n+ * Setting DELAY_CHKPT_COMPLETE prevents the system from moving from phase 2\n+ * to phase 3. This is useful if we are performing a WAL-logged operation that\n+ * might invalidate buffers, such as relation truncation. In this case, we need\n+ * to ensure that any buffers which were invalidated and thus not flushed by\n+ * the checkpoint are actually destroyed on disk. Replay can cope with a file\n+ * or block that doesn't exist, but not with a block that has the wrong\n+ * contents.\n+ */\n+#define DELAY_CHKPT_START\t\t(1<<0)\n+#define DELAY_CHKPT_COMPLETE\t(1<<1)\n+\n /*\n * Each backend has a PGPROC struct in shared memory. 
There is also a list of\n * currently-unused PGPROC structs that will be reallocated to new backends.\n@@ -149,7 +184,7 @@ struct PGPROC\n \tLOCKMASK\theldLocks;\t\t/* bitmask for lock types already held on this\n \t\t\t\t\t\t\t\t * lock object by this backend */\n \n-\tbool\t\tdelayChkpt;\t\t/* true if this proc delays checkpoint start */\n+\tint\t\t\tdelayChkpt;\t\t/* for DELAY_CHKPT_* flags */\n \n \t/*\n \t * Info to allow us to wait for synchronous replication, if needed.\ndiff --git a/src/include/storage/procarray.h b/src/include/storage/procarray.h\nindex 200ef8db27..4dee2dab10 100644\n--- a/src/include/storage/procarray.h\n+++ b/src/include/storage/procarray.h\n@@ -92,8 +92,9 @@ extern TransactionId GetOldestXmin(Relation rel, int flags);\n extern TransactionId GetOldestActiveTransactionId(void);\n extern TransactionId GetOldestSafeDecodingTransactionId(bool catalogOnly);\n \n-extern VirtualTransactionId *GetVirtualXIDsDelayingChkpt(int *nvxids);\n-extern bool HaveVirtualXIDsDelayingChkpt(VirtualTransactionId *vxids, int nvxids);\n+extern VirtualTransactionId *GetVirtualXIDsDelayingChkpt(int *nvxids, int type);\n+extern bool HaveVirtualXIDsDelayingChkpt(VirtualTransactionId *vxids,\n+\t\t\t\t\t\t\t\t\t\t int nvxids, int type);\n \n extern PGPROC *BackendPidGetProc(int pid);\n extern PGPROC *BackendPidGetProcWithLock(int pid);\n-- \n2.27.0\n\n\nFrom 3eb3c1df1fbccd7eb3dc0dcc1ed99938e5c12e44 Mon Sep 17 00:00:00 2001\nFrom: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nDate: Thu, 17 Mar 2022 19:32:38 +0900\nSubject: [PATCH] Fix possible recovery trouble if TRUNCATE overlaps a\n checkpoint.\nMIME-Version: 1.0\nContent-Type: text/plain; charset=UTF-8\nContent-Transfer-Encoding: 8bit\n\nIf TRUNCATE causes some buffers to be invalidated and thus the\ncheckpoint does not flush them, TRUNCATE must also ensure that the\ncorresponding files are truncated on disk. 
Otherwise, a replay\nfrom the checkpoint might find that the buffers exist but have\nthe wrong contents, which may cause replay to fail.\n\nReport by Teja Mupparti. Patch by Kyotaro Horiguchi, per a design\nsuggestion from Heikki Linnakangas, with some changes to the\ncomments by me. Review of this and a prior patch that approached\nthe issue differently by Heikki Linnakangas, Andres Freund, Álvaro\nHerrera, Masahiko Sawada, and Tom Lane.\n\nBack-patch to all supported versions.\n\nDiscussion: http://postgr.es/m/BYAPR06MB6373BF50B469CA393C614257ABF00@BYAPR06MB6373.namprd06.prod.outlook.com\n---\n src/backend/access/transam/multixact.c | 6 ++--\n src/backend/access/transam/twophase.c | 12 ++++----\n src/backend/access/transam/xact.c | 5 ++--\n src/backend/access/transam/xlog.c | 16 +++++++++--\n src/backend/access/transam/xloginsert.c | 2 +-\n src/backend/catalog/storage.c | 26 ++++++++++++++++-\n src/backend/storage/buffer/bufmgr.c | 6 ++--\n src/backend/storage/ipc/procarray.c | 26 ++++++++++++-----\n src/backend/storage/lmgr/proc.c | 4 +--\n src/include/storage/proc.h | 38 +++++++++++++++++++++++--\n src/include/storage/procarray.h | 5 ++--\n 11 files changed, 117 insertions(+), 29 deletions(-)\n\ndiff --git a/src/backend/access/transam/multixact.c b/src/backend/access/transam/multixact.c\nindex 09748905a8..757346cbbb 100644\n--- a/src/backend/access/transam/multixact.c\n+++ b/src/backend/access/transam/multixact.c\n@@ -3069,8 +3069,8 @@ TruncateMultiXact(MultiXactId newOldestMulti, Oid newOldestMultiDB)\n \t * crash/basebackup, even though the state of the data directory would\n \t * require it.\n \t */\n-\tAssert(!MyPgXact->delayChkpt);\n-\tMyPgXact->delayChkpt = true;\n+\tAssert((MyPgXact->delayChkpt & DELAY_CHKPT_START) == 0);\n+\tMyPgXact->delayChkpt |= DELAY_CHKPT_START;\n \n \t/* WAL log truncation */\n \tWriteMTruncateXlogRec(newOldestMultiDB,\n@@ -3096,7 +3096,7 @@ TruncateMultiXact(MultiXactId newOldestMulti, Oid newOldestMultiDB)\n \t/* Then offsets 
*/\n \tPerformOffsetsTruncation(oldestMulti, newOldestMulti);\n \n-\tMyPgXact->delayChkpt = false;\n+\tMyPgXact->delayChkpt &= ~DELAY_CHKPT_START;\n \n \tEND_CRIT_SECTION();\n \tLWLockRelease(MultiXactTruncationLock);\ndiff --git a/src/backend/access/transam/twophase.c b/src/backend/access/transam/twophase.c\nindex 6def1820ca..602ca41054 100644\n--- a/src/backend/access/transam/twophase.c\n+++ b/src/backend/access/transam/twophase.c\n@@ -477,7 +477,7 @@ MarkAsPreparingGuts(GlobalTransaction gxact, TransactionId xid, const char *gid,\n \t}\n \tpgxact->xid = xid;\n \tpgxact->xmin = InvalidTransactionId;\n-\tpgxact->delayChkpt = false;\n+\tpgxact->delayChkpt = 0;\n \tpgxact->vacuumFlags = 0;\n \tproc->pid = 0;\n \tproc->databaseId = databaseid;\n@@ -1187,7 +1187,8 @@ EndPrepare(GlobalTransaction gxact)\n \n \tSTART_CRIT_SECTION();\n \n-\tMyPgXact->delayChkpt = true;\n+\tAssert((MyPgXact->delayChkpt & DELAY_CHKPT_START) == 0);\n+\tMyPgXact->delayChkpt |= DELAY_CHKPT_START;\n \n \tXLogBeginInsert();\n \tfor (record = records.head; record != NULL; record = record->next)\n@@ -1230,7 +1231,7 @@ EndPrepare(GlobalTransaction gxact)\n \t * checkpoint starting after this will certainly see the gxact as a\n \t * candidate for fsyncing.\n \t */\n-\tMyPgXact->delayChkpt = false;\n+\tMyPgXact->delayChkpt &= ~DELAY_CHKPT_START;\n \n \t/*\n \t * Remember that we have this GlobalTransaction entry locked for us. If\n@@ -2337,7 +2338,8 @@ RecordTransactionCommitPrepared(TransactionId xid,\n \tSTART_CRIT_SECTION();\n \n \t/* See notes in RecordTransactionCommit */\n-\tMyPgXact->delayChkpt = true;\n+\tAssert((MyPgXact->delayChkpt & DELAY_CHKPT_START) == 0);\n+\tMyPgXact->delayChkpt |= DELAY_CHKPT_START;\n \n \t/*\n \t * Emit the XLOG commit record. 
Note that we mark 2PC commits as\n@@ -2385,7 +2387,7 @@ RecordTransactionCommitPrepared(TransactionId xid,\n \tTransactionIdCommitTree(xid, nchildren, children);\n \n \t/* Checkpoint can proceed now */\n-\tMyPgXact->delayChkpt = false;\n+\tMyPgXact->delayChkpt &= ~DELAY_CHKPT_START;\n \n \tEND_CRIT_SECTION();\n \ndiff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c\nindex 9c6b87c6ec..9d23298b2b 100644\n--- a/src/backend/access/transam/xact.c\n+++ b/src/backend/access/transam/xact.c\n@@ -1306,8 +1306,9 @@ RecordTransactionCommit(void)\n \t\t * This makes checkpoint's determination of which xacts are delayChkpt\n \t\t * a bit fuzzy, but it doesn't matter.\n \t\t */\n+\t\tAssert((MyPgXact->delayChkpt & DELAY_CHKPT_START) == 0);\n \t\tSTART_CRIT_SECTION();\n-\t\tMyPgXact->delayChkpt = true;\n+\t\tMyPgXact->delayChkpt |= DELAY_CHKPT_START;\n \n \t\tSetCurrentTransactionStopTimestamp();\n \n@@ -1408,7 +1409,7 @@ RecordTransactionCommit(void)\n \t */\n \tif (markXidCommitted)\n \t{\n-\t\tMyPgXact->delayChkpt = false;\n+\t\tMyPgXact->delayChkpt &= ~DELAY_CHKPT_START;\n \t\tEND_CRIT_SECTION();\n \t}\n \ndiff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\nindex a30314bc83..9135985eaf 100644\n--- a/src/backend/access/transam/xlog.c\n+++ b/src/backend/access/transam/xlog.c\n@@ -8920,18 +8920,30 @@ CreateCheckPoint(int flags)\n \t * and we will correctly flush the update below. 
So we cannot miss any\n \t * xacts we need to wait for.\n \t */\n-\tvxids = GetVirtualXIDsDelayingChkpt(&nvxids);\n+\tvxids = GetVirtualXIDsDelayingChkpt(&nvxids, DELAY_CHKPT_START);\n \tif (nvxids > 0)\n \t{\n \t\tdo\n \t\t{\n \t\t\tpg_usleep(10000L);\t/* wait for 10 msec */\n-\t\t} while (HaveVirtualXIDsDelayingChkpt(vxids, nvxids));\n+\t\t} while (HaveVirtualXIDsDelayingChkpt(vxids, nvxids,\n+\t\t\t\t\t\t\t\t\t\t\t DELAY_CHKPT_START));\n \t}\n \tpfree(vxids);\n \n \tCheckPointGuts(checkPoint.redo, flags);\n \n+\tvxids = GetVirtualXIDsDelayingChkpt(&nvxids, DELAY_CHKPT_COMPLETE);\n+\tif (nvxids > 0)\n+\t{\n+\t\tdo\n+\t\t{\n+\t\t\tpg_usleep(10000L);\t/* wait for 10 msec */\n+\t\t} while (HaveVirtualXIDsDelayingChkpt(vxids, nvxids,\n+\t\t\t\t\t\t\t\t\t\t\t DELAY_CHKPT_COMPLETE));\n+\t}\n+\tpfree(vxids);\n+\n \t/*\n \t * Take a snapshot of running transactions and write this to WAL. This\n \t * allows us to reconstruct the state of running transactions during\ndiff --git a/src/backend/access/transam/xloginsert.c b/src/backend/access/transam/xloginsert.c\nindex 24a6f3148b..b51b0edd67 100644\n--- a/src/backend/access/transam/xloginsert.c\n+++ b/src/backend/access/transam/xloginsert.c\n@@ -899,7 +899,7 @@ XLogSaveBufferForHint(Buffer buffer, bool buffer_std)\n \t/*\n \t * Ensure no checkpoint can change our view of RedoRecPtr.\n \t */\n-\tAssert(MyPgXact->delayChkpt);\n+\tAssert((MyPgXact->delayChkpt & DELAY_CHKPT_START) != 0);\n \n \t/*\n \t * Update RedoRecPtr so that we can make the right decision\ndiff --git a/src/backend/catalog/storage.c b/src/backend/catalog/storage.c\nindex f899b25c0e..5a6324fec4 100644\n--- a/src/backend/catalog/storage.c\n+++ b/src/backend/catalog/storage.c\n@@ -29,6 +29,7 @@\n #include \"catalog/storage.h\"\n #include \"catalog/storage_xlog.h\"\n #include \"storage/freespace.h\"\n+#include \"storage/proc.h\"\n #include \"storage/smgr.h\"\n #include \"utils/memutils.h\"\n #include \"utils/rel.h\"\n@@ -252,6 +253,22 @@ 
RelationTruncate(Relation rel, BlockNumber nblocks)\n \tif (vm)\n \t\tvisibilitymap_truncate(rel, nblocks);\n \n+\t/*\n+\t * Make sure that a concurrent checkpoint can't complete while truncation\n+\t * is in progress.\n+\t *\n+\t * The truncation operation might drop buffers that the checkpoint\n+\t * otherwise would have flushed. If it does, then it's essential that\n+\t * the files actually get truncated on disk before the checkpoint record\n+\t * is written. Otherwise, if replay begins from that checkpoint, the\n+\t * to-be-truncated blocks might still exist on disk but have older\n+\t * contents than expected, which can cause replay to fail. It's OK for\n+\t * the blocks to not exist on disk at all, but not for them to have the\n+\t * wrong contents.\n+\t */\n+\tAssert((MyPgXact->delayChkpt & DELAY_CHKPT_COMPLETE) == 0);\n+\tMyPgXact->delayChkpt |= DELAY_CHKPT_COMPLETE;\n+\n \t/*\n \t * We WAL-log the truncation before actually truncating, which means\n \t * trouble if the truncation fails. If we then crash, the WAL replay\n@@ -290,8 +307,15 @@ RelationTruncate(Relation rel, BlockNumber nblocks)\n \t\t\tXLogFlush(lsn);\n \t}\n \n-\t/* Do the real work */\n+\t/*\n+\t * This will first remove any buffers from the buffer pool that should no\n+\t * longer exist after truncation is complete, and then truncate the\n+\t * corresponding files on disk.\n+\t */\n \tsmgrtruncate(rel->rd_smgr, MAIN_FORKNUM, nblocks);\n+\n+\t/* We've done all the critical work, so checkpoints are OK now. 
*/\n+\tMyPgXact->delayChkpt &= ~DELAY_CHKPT_COMPLETE;\n }\n \n /*\ndiff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c\nindex 01c09fd532..7d11b0963f 100644\n--- a/src/backend/storage/buffer/bufmgr.c\n+++ b/src/backend/storage/buffer/bufmgr.c\n@@ -3514,7 +3514,9 @@ MarkBufferDirtyHint(Buffer buffer, bool buffer_std)\n \t\t\t * essential that CreateCheckpoint waits for virtual transactions\n \t\t\t * rather than full transactionids.\n \t\t\t */\n-\t\t\tMyPgXact->delayChkpt = delayChkpt = true;\n+\t\t\tAssert((MyPgXact->delayChkpt & DELAY_CHKPT_START) == 0);\n+\t\t\tMyPgXact->delayChkpt |= DELAY_CHKPT_START;\n+\t\t\tdelayChkpt = true;\n \t\t\tlsn = XLogSaveBufferForHint(buffer, buffer_std);\n \t\t}\n \n@@ -3547,7 +3549,7 @@ MarkBufferDirtyHint(Buffer buffer, bool buffer_std)\n \t\tUnlockBufHdr(bufHdr, buf_state);\n \n \t\tif (delayChkpt)\n-\t\t\tMyPgXact->delayChkpt = false;\n+\t\t\tMyPgXact->delayChkpt &= ~DELAY_CHKPT_START;\n \n \t\tif (dirtied)\n \t\t{\ndiff --git a/src/backend/storage/ipc/procarray.c b/src/backend/storage/ipc/procarray.c\nindex ec7e210226..39093253fe 100644\n--- a/src/backend/storage/ipc/procarray.c\n+++ b/src/backend/storage/ipc/procarray.c\n@@ -434,7 +434,10 @@ ProcArrayEndTransaction(PGPROC *proc, TransactionId latestXid)\n \t\tpgxact->xmin = InvalidTransactionId;\n \t\t/* must be cleared with xid/xmin: */\n \t\tpgxact->vacuumFlags &= ~PROC_VACUUM_STATE_MASK;\n-\t\tpgxact->delayChkpt = false; /* be sure this is cleared in abort */\n+\n+\t\t/* be sure this is cleared in abort */\n+\t\tpgxact->delayChkpt = 0;\n+\n \t\tproc->recoveryConflictPending = false;\n \n \t\tAssert(pgxact->nxids == 0);\n@@ -456,7 +459,10 @@ ProcArrayEndTransactionInternal(PGPROC *proc, PGXACT *pgxact,\n \tpgxact->xmin = InvalidTransactionId;\n \t/* must be cleared with xid/xmin: */\n \tpgxact->vacuumFlags &= ~PROC_VACUUM_STATE_MASK;\n-\tpgxact->delayChkpt = false; /* be sure this is cleared in abort */\n+\n+\t/* be sure this is cleared in 
abort */\n+\tpgxact->delayChkpt = 0;\n+\n \tproc->recoveryConflictPending = false;\n \n \t/* Clear the subtransaction-XID cache too while holding the lock */\n@@ -2261,7 +2267,8 @@ GetOldestSafeDecodingTransactionId(bool catalogOnly)\n * delaying checkpoint because they have critical actions in progress.\n *\n * Constructs an array of VXIDs of transactions that are currently in commit\n- * critical sections, as shown by having delayChkpt set in their PGXACT.\n+ * critical sections, as shown by having specified delayChkpt bits set in their\n+ * PGXACT.\n *\n * Returns a palloc'd array that should be freed by the caller.\n * *nvxids is the number of valid entries.\n@@ -2275,13 +2282,15 @@ GetOldestSafeDecodingTransactionId(bool catalogOnly)\n * for clearing of delayChkpt to propagate is unimportant for correctness.\n */\n VirtualTransactionId *\n-GetVirtualXIDsDelayingChkpt(int *nvxids)\n+GetVirtualXIDsDelayingChkpt(int *nvxids, int type)\n {\n \tVirtualTransactionId *vxids;\n \tProcArrayStruct *arrayP = procArray;\n \tint\t\t\tcount = 0;\n \tint\t\t\tindex;\n \n+\tAssert(type != 0);\n+\n \t/* allocate what's certainly enough result space */\n \tvxids = (VirtualTransactionId *)\n \t\tpalloc(sizeof(VirtualTransactionId) * arrayP->maxProcs);\n@@ -2294,7 +2303,7 @@ GetVirtualXIDsDelayingChkpt(int *nvxids)\n \t\tPGPROC\t *proc = &allProcs[pgprocno];\n \t\tPGXACT\t *pgxact = &allPgXact[pgprocno];\n \n-\t\tif (pgxact->delayChkpt)\n+\t\tif ((pgxact->delayChkpt & type) != 0)\n \t\t{\n \t\t\tVirtualTransactionId vxid;\n \n@@ -2320,12 +2329,14 @@ GetVirtualXIDsDelayingChkpt(int *nvxids)\n * those numbers should be small enough for it not to be a problem.\n */\n bool\n-HaveVirtualXIDsDelayingChkpt(VirtualTransactionId *vxids, int nvxids)\n+HaveVirtualXIDsDelayingChkpt(VirtualTransactionId *vxids, int nvxids, int type)\n {\n \tbool\t\tresult = false;\n \tProcArrayStruct *arrayP = procArray;\n \tint\t\t\tindex;\n \n+\tAssert(type != 0);\n+\n \tLWLockAcquire(ProcArrayLock, 
LW_SHARED);\n \n \tfor (index = 0; index < arrayP->numProcs; index++)\n@@ -2337,7 +2348,8 @@ HaveVirtualXIDsDelayingChkpt(VirtualTransactionId *vxids, int nvxids)\n \n \t\tGET_VXID_FROM_PGPROC(vxid, *proc);\n \n-\t\tif (pgxact->delayChkpt && VirtualTransactionIdIsValid(vxid))\n+\t\tif ((pgxact->delayChkpt & type) != 0 &&\n+\t\t\tVirtualTransactionIdIsValid(vxid))\n \t\t{\n \t\t\tint\t\t\ti;\n \ndiff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c\nindex 4850df2e14..59291e01f4 100644\n--- a/src/backend/storage/lmgr/proc.c\n+++ b/src/backend/storage/lmgr/proc.c\n@@ -397,7 +397,7 @@ InitProcess(void)\n \tMyProc->roleId = InvalidOid;\n \tMyProc->tempNamespaceId = InvalidOid;\n \tMyProc->isBackgroundWorker = IsBackgroundWorker;\n-\tMyPgXact->delayChkpt = false;\n+\tMyPgXact->delayChkpt = 0;\n \tMyPgXact->vacuumFlags = 0;\n \t/* NB -- autovac launcher intentionally does not set IS_AUTOVACUUM */\n \tif (IsAutoVacuumWorkerProcess())\n@@ -579,7 +579,7 @@ InitAuxiliaryProcess(void)\n \tMyProc->roleId = InvalidOid;\n \tMyProc->tempNamespaceId = InvalidOid;\n \tMyProc->isBackgroundWorker = IsBackgroundWorker;\n-\tMyPgXact->delayChkpt = false;\n+\tMyPgXact->delayChkpt = 0;\n \tMyPgXact->vacuumFlags = 0;\n \tMyProc->lwWaiting = false;\n \tMyProc->lwWaitMode = 0;\ndiff --git a/src/include/storage/proc.h b/src/include/storage/proc.h\nindex 43d0854a41..2a16fd23d4 100644\n--- a/src/include/storage/proc.h\n+++ b/src/include/storage/proc.h\n@@ -76,6 +76,41 @@ struct XidCache\n */\n #define INVALID_PGPROCNO\t\tPG_INT32_MAX\n \n+/*\n+ * Flags for PGPROC.delayChkpt\n+ *\n+ * These flags can be used to delay the start or completion of a checkpoint\n+ * for short periods. 
A flag is in effect if the corresponding bit is set in\n+ * the PGPROC of any backend.\n+ *\n+ * For our purposes here, a checkpoint has three phases: (1) determine the\n+ * location to which the redo pointer will be moved, (2) write all the\n+ * data durably to disk, and (3) WAL-log the checkpoint.\n+ *\n+ * Setting DELAY_CHKPT_START prevents the system from moving from phase 1\n+ * to phase 2. This is useful when we are performing a WAL-logged modification\n+ * of data that will be flushed to disk in phase 2. By setting this flag\n+ * before writing WAL and clearing it after we've both written WAL and\n+ * performed the corresponding modification, we ensure that if the WAL record\n+ * is inserted prior to the new redo point, the corresponding data changes will\n+ * also be flushed to disk before the checkpoint can complete. (In the\n+ * extremely common case where the data being modified is in shared buffers\n+ * and we acquire an exclusive content lock on the relevant buffers before\n+ * writing WAL, this mechanism is not needed, because phase 2 will block\n+ * until we release the content lock and then flush the modified data to\n+ * disk.)\n+ *\n+ * Setting DELAY_CHKPT_COMPLETE prevents the system from moving from phase 2\n+ * to phase 3. This is useful if we are performing a WAL-logged operation that\n+ * might invalidate buffers, such as relation truncation. In this case, we need\n+ * to ensure that any buffers which were invalidated and thus not flushed by\n+ * the checkpoint are actually destroyed on disk. Replay can cope with a file\n+ * or block that doesn't exist, but not with a block that has the wrong\n+ * contents.\n+ */\n+#define DELAY_CHKPT_START\t\t(1<<0)\n+#define DELAY_CHKPT_COMPLETE\t(1<<1)\n+\n /*\n * Each backend has a PGPROC struct in shared memory. 
There is also a list of\n * currently-unused PGPROC structs that will be reallocated to new backends.\n@@ -232,8 +267,7 @@ typedef struct PGXACT\n \n \tuint8\t\tvacuumFlags;\t/* vacuum-related flags, see above */\n \tbool\t\toverflowed;\n-\tbool\t\tdelayChkpt;\t\t/* true if this proc delays checkpoint start;\n-\t\t\t\t\t\t\t\t * previously called InCommit */\n+\tint\t\t\tdelayChkpt;\t\t/* for DELAY_CHKPT_* flags */\n \n \tuint8\t\tnxids;\n } PGXACT;\ndiff --git a/src/include/storage/procarray.h b/src/include/storage/procarray.h\nindex d1dc0ffe28..d9ca460efc 100644\n--- a/src/include/storage/procarray.h\n+++ b/src/include/storage/procarray.h\n@@ -92,8 +92,9 @@ extern TransactionId GetOldestXmin(Relation rel, int flags);\n extern TransactionId GetOldestActiveTransactionId(void);\n extern TransactionId GetOldestSafeDecodingTransactionId(bool catalogOnly);\n \n-extern VirtualTransactionId *GetVirtualXIDsDelayingChkpt(int *nvxids);\n-extern bool HaveVirtualXIDsDelayingChkpt(VirtualTransactionId *vxids, int nvxids);\n+extern VirtualTransactionId *GetVirtualXIDsDelayingChkpt(int *nvxids, int type);\n+extern bool HaveVirtualXIDsDelayingChkpt(VirtualTransactionId *vxids,\n+\t\t\t\t\t\t\t\t\t\t int nvxids, int type);\n \n extern PGPROC *BackendPidGetProc(int pid);\n extern PGPROC *BackendPidGetProcWithLock(int pid);\n-- \n2.27.0\n\n\nFrom 30fd7eea362f38a64f62fc91123bc387dabed15f Mon Sep 17 00:00:00 2001\nFrom: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nDate: Thu, 17 Mar 2022 19:36:10 +0900\nSubject: [PATCH] Fix possible recovery trouble if TRUNCATE overlaps a\n checkpoint.\nMIME-Version: 1.0\nContent-Type: text/plain; charset=UTF-8\nContent-Transfer-Encoding: 8bit\n\nIf TRUNCATE causes some buffers to be invalidated and thus the\ncheckpoint does not flush them, TRUNCATE must also ensure that the\ncorresponding files are truncated on disk. 
Otherwise, a replay\nfrom the checkpoint might find that the buffers exist but have\nthe wrong contents, which may cause replay to fail.\n\nReport by Teja Mupparti. Patch by Kyotaro Horiguchi, per a design\nsuggestion from Heikki Linnakangas, with some changes to the\ncomments by me. Review of this and a prior patch that approached\nthe issue differently by Heikki Linnakangas, Andres Freund, Álvaro\nHerrera, Masahiko Sawada, and Tom Lane.\n\nBack-patch to all supported versions.\n\nDiscussion: http://postgr.es/m/BYAPR06MB6373BF50B469CA393C614257ABF00@BYAPR06MB6373.namprd06.prod.outlook.com\n---\n src/backend/access/transam/multixact.c | 6 ++--\n src/backend/access/transam/twophase.c | 12 ++++----\n src/backend/access/transam/xact.c | 5 ++--\n src/backend/access/transam/xlog.c | 16 +++++++++--\n src/backend/access/transam/xloginsert.c | 2 +-\n src/backend/catalog/storage.c | 26 ++++++++++++++++-\n src/backend/storage/buffer/bufmgr.c | 6 ++--\n src/backend/storage/ipc/procarray.c | 26 ++++++++++++-----\n src/backend/storage/lmgr/proc.c | 4 +--\n src/include/storage/proc.h | 38 +++++++++++++++++++++++--\n src/include/storage/procarray.h | 5 ++--\n 11 files changed, 117 insertions(+), 29 deletions(-)\n\ndiff --git a/src/backend/access/transam/multixact.c b/src/backend/access/transam/multixact.c\nindex ad9e7ff8f0..5612db0e21 100644\n--- a/src/backend/access/transam/multixact.c\n+++ b/src/backend/access/transam/multixact.c\n@@ -3069,8 +3069,8 @@ TruncateMultiXact(MultiXactId newOldestMulti, Oid newOldestMultiDB)\n \t * crash/basebackup, even though the state of the data directory would\n \t * require it.\n \t */\n-\tAssert(!MyPgXact->delayChkpt);\n-\tMyPgXact->delayChkpt = true;\n+\tAssert((MyPgXact->delayChkpt & DELAY_CHKPT_START) == 0);\n+\tMyPgXact->delayChkpt |= DELAY_CHKPT_START;\n \n \t/* WAL log truncation */\n \tWriteMTruncateXlogRec(newOldestMultiDB,\n@@ -3096,7 +3096,7 @@ TruncateMultiXact(MultiXactId newOldestMulti, Oid newOldestMultiDB)\n \t/* Then offsets 
*/\n \tPerformOffsetsTruncation(oldestMulti, newOldestMulti);\n \n-\tMyPgXact->delayChkpt = false;\n+\tMyPgXact->delayChkpt &= ~DELAY_CHKPT_START;\n \n \tEND_CRIT_SECTION();\n \tLWLockRelease(MultiXactTruncationLock);\ndiff --git a/src/backend/access/transam/twophase.c b/src/backend/access/transam/twophase.c\nindex 8b402c3a1d..769a5fd714 100644\n--- a/src/backend/access/transam/twophase.c\n+++ b/src/backend/access/transam/twophase.c\n@@ -476,7 +476,7 @@ MarkAsPreparingGuts(GlobalTransaction gxact, TransactionId xid, const char *gid,\n \t}\n \tpgxact->xid = xid;\n \tpgxact->xmin = InvalidTransactionId;\n-\tpgxact->delayChkpt = false;\n+\tpgxact->delayChkpt = 0;\n \tpgxact->vacuumFlags = 0;\n \tproc->pid = 0;\n \tproc->databaseId = databaseid;\n@@ -1175,7 +1175,8 @@ EndPrepare(GlobalTransaction gxact)\n \n \tSTART_CRIT_SECTION();\n \n-\tMyPgXact->delayChkpt = true;\n+\tAssert((MyPgXact->delayChkpt & DELAY_CHKPT_START) == 0);\n+\tMyPgXact->delayChkpt |= DELAY_CHKPT_START;\n \n \tXLogBeginInsert();\n \tfor (record = records.head; record != NULL; record = record->next)\n@@ -1218,7 +1219,7 @@ EndPrepare(GlobalTransaction gxact)\n \t * checkpoint starting after this will certainly see the gxact as a\n \t * candidate for fsyncing.\n \t */\n-\tMyPgXact->delayChkpt = false;\n+\tMyPgXact->delayChkpt &= ~DELAY_CHKPT_START;\n \n \t/*\n \t * Remember that we have this GlobalTransaction entry locked for us. If\n@@ -2352,7 +2353,8 @@ RecordTransactionCommitPrepared(TransactionId xid,\n \tSTART_CRIT_SECTION();\n \n \t/* See notes in RecordTransactionCommit */\n-\tMyPgXact->delayChkpt = true;\n+\tAssert((MyPgXact->delayChkpt & DELAY_CHKPT_START) == 0);\n+\tMyPgXact->delayChkpt |= DELAY_CHKPT_START;\n \n \t/*\n \t * Emit the XLOG commit record. 
Note that we mark 2PC commits as\n@@ -2400,7 +2402,7 @@ RecordTransactionCommitPrepared(TransactionId xid,\n \tTransactionIdCommitTree(xid, nchildren, children);\n \n \t/* Checkpoint can proceed now */\n-\tMyPgXact->delayChkpt = false;\n+\tMyPgXact->delayChkpt &= ~DELAY_CHKPT_START;\n \n \tEND_CRIT_SECTION();\n \ndiff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c\nindex e32b05d17f..5a86b6575e 100644\n--- a/src/backend/access/transam/xact.c\n+++ b/src/backend/access/transam/xact.c\n@@ -1239,8 +1239,9 @@ RecordTransactionCommit(void)\n \t\t * This makes checkpoint's determination of which xacts are delayChkpt\n \t\t * a bit fuzzy, but it doesn't matter.\n \t\t */\n+\t\tAssert((MyPgXact->delayChkpt & DELAY_CHKPT_START) == 0);\n \t\tSTART_CRIT_SECTION();\n-\t\tMyPgXact->delayChkpt = true;\n+\t\tMyPgXact->delayChkpt |= DELAY_CHKPT_START;\n \n \t\tSetCurrentTransactionStopTimestamp();\n \n@@ -1341,7 +1342,7 @@ RecordTransactionCommit(void)\n \t */\n \tif (markXidCommitted)\n \t{\n-\t\tMyPgXact->delayChkpt = false;\n+\t\tMyPgXact->delayChkpt &= ~DELAY_CHKPT_START;\n \t\tEND_CRIT_SECTION();\n \t}\n \ndiff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\nindex c68dc1b9a8..53e109b0aa 100644\n--- a/src/backend/access/transam/xlog.c\n+++ b/src/backend/access/transam/xlog.c\n@@ -9064,18 +9064,30 @@ CreateCheckPoint(int flags)\n \t * and we will correctly flush the update below. 
So we cannot miss any\n \t * xacts we need to wait for.\n \t */\n-\tvxids = GetVirtualXIDsDelayingChkpt(&nvxids);\n+\tvxids = GetVirtualXIDsDelayingChkpt(&nvxids, DELAY_CHKPT_START);\n \tif (nvxids > 0)\n \t{\n \t\tdo\n \t\t{\n \t\t\tpg_usleep(10000L);\t/* wait for 10 msec */\n-\t\t} while (HaveVirtualXIDsDelayingChkpt(vxids, nvxids));\n+\t\t} while (HaveVirtualXIDsDelayingChkpt(vxids, nvxids,\n+\t\t\t\t\t\t\t\t\t\t\t DELAY_CHKPT_START));\n \t}\n \tpfree(vxids);\n \n \tCheckPointGuts(checkPoint.redo, flags);\n \n+\tvxids = GetVirtualXIDsDelayingChkpt(&nvxids, DELAY_CHKPT_COMPLETE);\n+\tif (nvxids > 0)\n+\t{\n+\t\tdo\n+\t\t{\n+\t\t\tpg_usleep(10000L);\t/* wait for 10 msec */\n+\t\t} while (HaveVirtualXIDsDelayingChkpt(vxids, nvxids,\n+\t\t\t\t\t\t\t\t\t\t\t DELAY_CHKPT_COMPLETE));\n+\t}\n+\tpfree(vxids);\n+\n \t/*\n \t * Take a snapshot of running transactions and write this to WAL. This\n \t * allows us to reconstruct the state of running transactions during\ndiff --git a/src/backend/access/transam/xloginsert.c b/src/backend/access/transam/xloginsert.c\nindex c033e7bd4c..a8c140b06f 100644\n--- a/src/backend/access/transam/xloginsert.c\n+++ b/src/backend/access/transam/xloginsert.c\n@@ -899,7 +899,7 @@ XLogSaveBufferForHint(Buffer buffer, bool buffer_std)\n \t/*\n \t * Ensure no checkpoint can change our view of RedoRecPtr.\n \t */\n-\tAssert(MyPgXact->delayChkpt);\n+\tAssert((MyPgXact->delayChkpt & DELAY_CHKPT_START) != 0);\n \n \t/*\n \t * Update RedoRecPtr so that we can make the right decision\ndiff --git a/src/backend/catalog/storage.c b/src/backend/catalog/storage.c\nindex 5df4382b7e..5d6f456c70 100644\n--- a/src/backend/catalog/storage.c\n+++ b/src/backend/catalog/storage.c\n@@ -27,6 +27,7 @@\n #include \"catalog/storage.h\"\n #include \"catalog/storage_xlog.h\"\n #include \"storage/freespace.h\"\n+#include \"storage/proc.h\"\n #include \"storage/smgr.h\"\n #include \"utils/memutils.h\"\n #include \"utils/rel.h\"\n@@ -248,6 +249,22 @@ 
RelationTruncate(Relation rel, BlockNumber nblocks)\n \tif (vm)\n \t\tvisibilitymap_truncate(rel, nblocks);\n \n+\t/*\n+\t * Make sure that a concurrent checkpoint can't complete while truncation\n+\t * is in progress.\n+\t *\n+\t * The truncation operation might drop buffers that the checkpoint\n+\t * otherwise would have flushed. If it does, then it's essential that\n+\t * the files actually get truncated on disk before the checkpoint record\n+\t * is written. Otherwise, if replay begins from that checkpoint, the\n+\t * to-be-truncated blocks might still exist on disk but have older\n+\t * contents than expected, which can cause replay to fail. It's OK for\n+\t * the blocks to not exist on disk at all, but not for them to have the\n+\t * wrong contents.\n+\t */\n+\tAssert((MyPgXact->delayChkpt & DELAY_CHKPT_COMPLETE) == 0);\n+\tMyPgXact->delayChkpt |= DELAY_CHKPT_COMPLETE;\n+\n \t/*\n \t * We WAL-log the truncation before actually truncating, which means\n \t * trouble if the truncation fails. If we then crash, the WAL replay\n@@ -286,8 +303,15 @@ RelationTruncate(Relation rel, BlockNumber nblocks)\n \t\t\tXLogFlush(lsn);\n \t}\n \n-\t/* Do the real work */\n+\t/*\n+\t * This will first remove any buffers from the buffer pool that should no\n+\t * longer exist after truncation is complete, and then truncate the\n+\t * corresponding files on disk.\n+\t */\n \tsmgrtruncate(rel->rd_smgr, MAIN_FORKNUM, nblocks);\n+\n+\t/* We've done all the critical work, so checkpoints are OK now. 

*/\n+\tMyPgXact->delayChkpt &= ~DELAY_CHKPT_COMPLETE;\n }\n \n /*\ndiff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c\nindex 459151519a..027d5067a0 100644\n--- a/src/backend/storage/buffer/bufmgr.c\n+++ b/src/backend/storage/buffer/bufmgr.c\n@@ -3471,7 +3471,9 @@ MarkBufferDirtyHint(Buffer buffer, bool buffer_std)\n \t\t\t * essential that CreateCheckpoint waits for virtual transactions\n \t\t\t * rather than full transactionids.\n \t\t\t */\n-\t\t\tMyPgXact->delayChkpt = delayChkpt = true;\n+\t\t\tAssert((MyPgXact->delayChkpt & DELAY_CHKPT_START) == 0);\n+\t\t\tMyPgXact->delayChkpt |= DELAY_CHKPT_START;\n+\t\t\tdelayChkpt = true;\n \t\t\tlsn = XLogSaveBufferForHint(buffer, buffer_std);\n \t\t}\n \n@@ -3504,7 +3506,7 @@ MarkBufferDirtyHint(Buffer buffer, bool buffer_std)\n \t\tUnlockBufHdr(bufHdr, buf_state);\n \n \t\tif (delayChkpt)\n-\t\t\tMyPgXact->delayChkpt = false;\n+\t\t\tMyPgXact->delayChkpt &= ~DELAY_CHKPT_START;\n \n \t\tif (dirtied)\n \t\t{\ndiff --git a/src/backend/storage/ipc/procarray.c b/src/backend/storage/ipc/procarray.c\nindex 465ca66857..d88d955091 100644\n--- a/src/backend/storage/ipc/procarray.c\n+++ b/src/backend/storage/ipc/procarray.c\n@@ -433,7 +433,10 @@ ProcArrayEndTransaction(PGPROC *proc, TransactionId latestXid)\n \t\tpgxact->xmin = InvalidTransactionId;\n \t\t/* must be cleared with xid/xmin: */\n \t\tpgxact->vacuumFlags &= ~PROC_VACUUM_STATE_MASK;\n-\t\tpgxact->delayChkpt = false; /* be sure this is cleared in abort */\n+\n+\t\t/* be sure this is cleared in abort */\n+\t\tpgxact->delayChkpt = 0;\n+\n \t\tproc->recoveryConflictPending = false;\n \n \t\tAssert(pgxact->nxids == 0);\n@@ -455,7 +458,10 @@ ProcArrayEndTransactionInternal(PGPROC *proc, PGXACT *pgxact,\n \tpgxact->xmin = InvalidTransactionId;\n \t/* must be cleared with xid/xmin: */\n \tpgxact->vacuumFlags &= ~PROC_VACUUM_STATE_MASK;\n-\tpgxact->delayChkpt = false; /* be sure this is cleared in abort */\n+\n+\t/* be sure this is cleared in 
abort */\n+\tpgxact->delayChkpt = 0;\n+\n \tproc->recoveryConflictPending = false;\n \n \t/* Clear the subtransaction-XID cache too while holding the lock */\n@@ -2267,7 +2273,8 @@ GetOldestSafeDecodingTransactionId(bool catalogOnly)\n * delaying checkpoint because they have critical actions in progress.\n *\n * Constructs an array of VXIDs of transactions that are currently in commit\n- * critical sections, as shown by having delayChkpt set in their PGXACT.\n+ * critical sections, as shown by having specified delayChkpt bits set in their\n+ * PGXACT.\n *\n * Returns a palloc'd array that should be freed by the caller.\n * *nvxids is the number of valid entries.\n@@ -2281,13 +2288,15 @@ GetOldestSafeDecodingTransactionId(bool catalogOnly)\n * for clearing of delayChkpt to propagate is unimportant for correctness.\n */\n VirtualTransactionId *\n-GetVirtualXIDsDelayingChkpt(int *nvxids)\n+GetVirtualXIDsDelayingChkpt(int *nvxids, int type)\n {\n \tVirtualTransactionId *vxids;\n \tProcArrayStruct *arrayP = procArray;\n \tint\t\t\tcount = 0;\n \tint\t\t\tindex;\n \n+\tAssert(type != 0);\n+\n \t/* allocate what's certainly enough result space */\n \tvxids = (VirtualTransactionId *)\n \t\tpalloc(sizeof(VirtualTransactionId) * arrayP->maxProcs);\n@@ -2300,7 +2309,7 @@ GetVirtualXIDsDelayingChkpt(int *nvxids)\n \t\tvolatile PGPROC *proc = &allProcs[pgprocno];\n \t\tvolatile PGXACT *pgxact = &allPgXact[pgprocno];\n \n-\t\tif (pgxact->delayChkpt)\n+\t\tif ((pgxact->delayChkpt & type) != 0)\n \t\t{\n \t\t\tVirtualTransactionId vxid;\n \n@@ -2326,12 +2335,14 @@ GetVirtualXIDsDelayingChkpt(int *nvxids)\n * those numbers should be small enough for it not to be a problem.\n */\n bool\n-HaveVirtualXIDsDelayingChkpt(VirtualTransactionId *vxids, int nvxids)\n+HaveVirtualXIDsDelayingChkpt(VirtualTransactionId *vxids, int nvxids, int type)\n {\n \tbool\t\tresult = false;\n \tProcArrayStruct *arrayP = procArray;\n \tint\t\t\tindex;\n \n+\tAssert(type != 0);\n+\n 
\tLWLockAcquire(ProcArrayLock, LW_SHARED);\n \n \tfor (index = 0; index < arrayP->numProcs; index++)\n@@ -2343,7 +2354,8 @@ HaveVirtualXIDsDelayingChkpt(VirtualTransactionId *vxids, int nvxids)\n \n \t\tGET_VXID_FROM_PGPROC(vxid, *proc);\n \n-\t\tif (pgxact->delayChkpt && VirtualTransactionIdIsValid(vxid))\n+\t\tif ((pgxact->delayChkpt & type) != 0 &&\n+\t\t\tVirtualTransactionIdIsValid(vxid))\n \t\t{\n \t\t\tint\t\t\ti;\n \ndiff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c\nindex 69a1e37289..aaecfa67b7 100644\n--- a/src/backend/storage/lmgr/proc.c\n+++ b/src/backend/storage/lmgr/proc.c\n@@ -380,7 +380,7 @@ InitProcess(void)\n \tMyProc->roleId = InvalidOid;\n \tMyProc->tempNamespaceId = InvalidOid;\n \tMyProc->isBackgroundWorker = IsBackgroundWorker;\n-\tMyPgXact->delayChkpt = false;\n+\tMyPgXact->delayChkpt = 0;\n \tMyPgXact->vacuumFlags = 0;\n \t/* NB -- autovac launcher intentionally does not set IS_AUTOVACUUM */\n \tif (IsAutoVacuumWorkerProcess())\n@@ -562,7 +562,7 @@ InitAuxiliaryProcess(void)\n \tMyProc->roleId = InvalidOid;\n \tMyProc->tempNamespaceId = InvalidOid;\n \tMyProc->isBackgroundWorker = IsBackgroundWorker;\n-\tMyPgXact->delayChkpt = false;\n+\tMyPgXact->delayChkpt = 0;\n \tMyPgXact->vacuumFlags = 0;\n \tMyProc->lwWaiting = false;\n \tMyProc->lwWaitMode = 0;\ndiff --git a/src/include/storage/proc.h b/src/include/storage/proc.h\nindex 95c9592b21..e76ca8a11e 100644\n--- a/src/include/storage/proc.h\n+++ b/src/include/storage/proc.h\n@@ -76,6 +76,41 @@ struct XidCache\n */\n #define INVALID_PGPROCNO\t\tPG_INT32_MAX\n \n+/*\n+ * Flags for PGPROC.delayChkpt\n+ *\n+ * These flags can be used to delay the start or completion of a checkpoint\n+ * for short periods. 
A flag is in effect if the corresponding bit is set in\n+ * the PGPROC of any backend.\n+ *\n+ * For our purposes here, a checkpoint has three phases: (1) determine the\n+ * location to which the redo pointer will be moved, (2) write all the\n+ * data durably to disk, and (3) WAL-log the checkpoint.\n+ *\n+ * Setting DELAY_CHKPT_START prevents the system from moving from phase 1\n+ * to phase 2. This is useful when we are performing a WAL-logged modification\n+ * of data that will be flushed to disk in phase 2. By setting this flag\n+ * before writing WAL and clearing it after we've both written WAL and\n+ * performed the corresponding modification, we ensure that if the WAL record\n+ * is inserted prior to the new redo point, the corresponding data changes will\n+ * also be flushed to disk before the checkpoint can complete. (In the\n+ * extremely common case where the data being modified is in shared buffers\n+ * and we acquire an exclusive content lock on the relevant buffers before\n+ * writing WAL, this mechanism is not needed, because phase 2 will block\n+ * until we release the content lock and then flush the modified data to\n+ * disk.)\n+ *\n+ * Setting DELAY_CHKPT_COMPLETE prevents the system from moving from phase 2\n+ * to phase 3. This is useful if we are performing a WAL-logged operation that\n+ * might invalidate buffers, such as relation truncation. In this case, we need\n+ * to ensure that any buffers which were invalidated and thus not flushed by\n+ * the checkpoint are actually destroyed on disk. Replay can cope with a file\n+ * or block that doesn't exist, but not with a block that has the wrong\n+ * contents.\n+ */\n+#define DELAY_CHKPT_START\t\t(1<<0)\n+#define DELAY_CHKPT_COMPLETE\t(1<<1)\n+\n /*\n * Each backend has a PGPROC struct in shared memory. 

There is also a list of\n * currently-unused PGPROC structs that will be reallocated to new backends.\n@@ -232,8 +267,7 @@ typedef struct PGXACT\n \n \tuint8\t\tvacuumFlags;\t/* vacuum-related flags, see above */\n \tbool\t\toverflowed;\n-\tbool\t\tdelayChkpt;\t\t/* true if this proc delays checkpoint start;\n-\t\t\t\t\t\t\t\t * previously called InCommit */\n+\tint\t\t\tdelayChkpt;\t\t/* for DELAY_CHKPT_* flags */\n \n \tuint8\t\tnxids;\n } PGXACT;\ndiff --git a/src/include/storage/procarray.h b/src/include/storage/procarray.h\nindex a3a1bf724c..a69632a70c 100644\n--- a/src/include/storage/procarray.h\n+++ b/src/include/storage/procarray.h\n@@ -92,8 +92,9 @@ extern TransactionId GetOldestXmin(Relation rel, int flags);\n extern TransactionId GetOldestActiveTransactionId(void);\n extern TransactionId GetOldestSafeDecodingTransactionId(bool catalogOnly);\n \n-extern VirtualTransactionId *GetVirtualXIDsDelayingChkpt(int *nvxids);\n-extern bool HaveVirtualXIDsDelayingChkpt(VirtualTransactionId *vxids, int nvxids);\n+extern VirtualTransactionId *GetVirtualXIDsDelayingChkpt(int *nvxids, int type);\n+extern bool HaveVirtualXIDsDelayingChkpt(VirtualTransactionId *vxids,\n+\t\t\t\t\t\t\t\t\t\t int nvxids, int type);\n \n extern PGPROC *BackendPidGetProc(int pid);\n extern PGPROC *BackendPidGetProcWithLock(int pid);\n-- \n2.27.0\n\n\nFrom f0b1e3bee795a54d2a701889dd5956283fbc2cf6 Mon Sep 17 00:00:00 2001\nFrom: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nDate: Thu, 17 Mar 2022 19:40:45 +0900\nSubject: [PATCH] Fix possible recovery trouble if TRUNCATE overlaps a\n checkpoint.\nMIME-Version: 1.0\nContent-Type: text/plain; charset=UTF-8\nContent-Transfer-Encoding: 8bit\n\nIf TRUNCATE causes some buffers to be invalidated and thus the\ncheckpoint does not flush them, TRUNCATE must also ensure that the\ncorresponding files are truncated on disk. 
Otherwise, a replay\nfrom the checkpoint might find that the buffers exist but have\nthe wrong contents, which may cause replay to fail.\n\nReport by Teja Mupparti. Patch by Kyotaro Horiguchi, per a design\nsuggestion from Heikki Linnakangas, with some changes to the\ncomments by me. Review of this and a prior patch that approached\nthe issue differently by Heikki Linnakangas, Andres Freund, Álvaro\nHerrera, Masahiko Sawada, and Tom Lane.\n\nBack-patch to all supported versions.\n\nDiscussion: http://postgr.es/m/BYAPR06MB6373BF50B469CA393C614257ABF00@BYAPR06MB6373.namprd06.prod.outlook.com\n---\n src/backend/access/transam/multixact.c | 6 ++--\n src/backend/access/transam/twophase.c | 12 ++++----\n src/backend/access/transam/xact.c | 5 ++--\n src/backend/access/transam/xlog.c | 16 +++++++++--\n src/backend/access/transam/xloginsert.c | 2 +-\n src/backend/catalog/storage.c | 26 ++++++++++++++++-\n src/backend/storage/buffer/bufmgr.c | 6 ++--\n src/backend/storage/ipc/procarray.c | 26 ++++++++++++-----\n src/backend/storage/lmgr/proc.c | 4 +--\n src/include/storage/proc.h | 38 +++++++++++++++++++++++--\n src/include/storage/procarray.h | 5 ++--\n 11 files changed, 117 insertions(+), 29 deletions(-)\n\ndiff --git a/src/backend/access/transam/multixact.c b/src/backend/access/transam/multixact.c\nindex cdaf499348..1e52972bbf 100644\n--- a/src/backend/access/transam/multixact.c\n+++ b/src/backend/access/transam/multixact.c\n@@ -3069,8 +3069,8 @@ TruncateMultiXact(MultiXactId newOldestMulti, Oid newOldestMultiDB)\n \t * crash/basebackup, even though the state of the data directory would\n \t * require it.\n \t */\n-\tAssert(!MyPgXact->delayChkpt);\n-\tMyPgXact->delayChkpt = true;\n+\tAssert((MyPgXact->delayChkpt & DELAY_CHKPT_START) == 0);\n+\tMyPgXact->delayChkpt |= DELAY_CHKPT_START;\n \n \t/* WAL log truncation */\n \tWriteMTruncateXlogRec(newOldestMultiDB,\n@@ -3096,7 +3096,7 @@ TruncateMultiXact(MultiXactId newOldestMulti, Oid newOldestMultiDB)\n \t/* Then offsets 
*/\n \tPerformOffsetsTruncation(oldestMulti, newOldestMulti);\n \n-\tMyPgXact->delayChkpt = false;\n+\tMyPgXact->delayChkpt &= ~DELAY_CHKPT_START;\n \n \tEND_CRIT_SECTION();\n \tLWLockRelease(MultiXactTruncationLock);\ndiff --git a/src/backend/access/transam/twophase.c b/src/backend/access/transam/twophase.c\nindex 3eb33be69b..c61b2736a1 100644\n--- a/src/backend/access/transam/twophase.c\n+++ b/src/backend/access/transam/twophase.c\n@@ -478,7 +478,7 @@ MarkAsPreparingGuts(GlobalTransaction gxact, TransactionId xid, const char *gid,\n \t}\n \tpgxact->xid = xid;\n \tpgxact->xmin = InvalidTransactionId;\n-\tpgxact->delayChkpt = false;\n+\tpgxact->delayChkpt = 0;\n \tpgxact->vacuumFlags = 0;\n \tproc->pid = 0;\n \tproc->databaseId = databaseid;\n@@ -1159,7 +1159,8 @@ EndPrepare(GlobalTransaction gxact)\n \n \tSTART_CRIT_SECTION();\n \n-\tMyPgXact->delayChkpt = true;\n+\tAssert((MyPgXact->delayChkpt & DELAY_CHKPT_START) == 0);\n+\tMyPgXact->delayChkpt |= DELAY_CHKPT_START;\n \n \tXLogBeginInsert();\n \tfor (record = records.head; record != NULL; record = record->next)\n@@ -1191,7 +1192,7 @@ EndPrepare(GlobalTransaction gxact)\n \t * checkpoint starting after this will certainly see the gxact as a\n \t * candidate for fsyncing.\n \t */\n-\tMyPgXact->delayChkpt = false;\n+\tMyPgXact->delayChkpt &= ~DELAY_CHKPT_START;\n \n \t/*\n \t * Remember that we have this GlobalTransaction entry locked for us. If\n@@ -2284,7 +2285,8 @@ RecordTransactionCommitPrepared(TransactionId xid,\n \tSTART_CRIT_SECTION();\n \n \t/* See notes in RecordTransactionCommit */\n-\tMyPgXact->delayChkpt = true;\n+\tAssert((MyPgXact->delayChkpt & DELAY_CHKPT_START) == 0);\n+\tMyPgXact->delayChkpt |= DELAY_CHKPT_START;\n \n \t/*\n \t * Emit the XLOG commit record. 
Note that we mark 2PC commits as\n@@ -2332,7 +2334,7 @@ RecordTransactionCommitPrepared(TransactionId xid,\n \tTransactionIdCommitTree(xid, nchildren, children);\n \n \t/* Checkpoint can proceed now */\n-\tMyPgXact->delayChkpt = false;\n+\tMyPgXact->delayChkpt &= ~DELAY_CHKPT_START;\n \n \tEND_CRIT_SECTION();\n \ndiff --git a/src/backend/access/transam/xact.c b/src/backend/access/transam/xact.c\nindex 25a3a4f97e..ccd99c38c2 100644\n--- a/src/backend/access/transam/xact.c\n+++ b/src/backend/access/transam/xact.c\n@@ -1247,8 +1247,9 @@ RecordTransactionCommit(void)\n \t\t * This makes checkpoint's determination of which xacts are delayChkpt\n \t\t * a bit fuzzy, but it doesn't matter.\n \t\t */\n+\t\tAssert((MyPgXact->delayChkpt & DELAY_CHKPT_START) == 0);\n \t\tSTART_CRIT_SECTION();\n-\t\tMyPgXact->delayChkpt = true;\n+\t\tMyPgXact->delayChkpt |= DELAY_CHKPT_START;\n \n \t\tSetCurrentTransactionStopTimestamp();\n \n@@ -1349,7 +1350,7 @@ RecordTransactionCommit(void)\n \t */\n \tif (markXidCommitted)\n \t{\n-\t\tMyPgXact->delayChkpt = false;\n+\t\tMyPgXact->delayChkpt &= ~DELAY_CHKPT_START;\n \t\tEND_CRIT_SECTION();\n \t}\n \ndiff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\nindex 8e8bdde764..5087b5fe0a 100644\n--- a/src/backend/access/transam/xlog.c\n+++ b/src/backend/access/transam/xlog.c\n@@ -9022,18 +9022,30 @@ CreateCheckPoint(int flags)\n \t * and we will correctly flush the update below. 
So we cannot miss any\n \t * xacts we need to wait for.\n \t */\n-\tvxids = GetVirtualXIDsDelayingChkpt(&nvxids);\n+\tvxids = GetVirtualXIDsDelayingChkpt(&nvxids, DELAY_CHKPT_START);\n \tif (nvxids > 0)\n \t{\n \t\tdo\n \t\t{\n \t\t\tpg_usleep(10000L);\t/* wait for 10 msec */\n-\t\t} while (HaveVirtualXIDsDelayingChkpt(vxids, nvxids));\n+\t\t} while (HaveVirtualXIDsDelayingChkpt(vxids, nvxids,\n+\t\t\t\t\t\t\t\t\t\t\t DELAY_CHKPT_START));\n \t}\n \tpfree(vxids);\n \n \tCheckPointGuts(checkPoint.redo, flags);\n \n+\tvxids = GetVirtualXIDsDelayingChkpt(&nvxids, DELAY_CHKPT_COMPLETE);\n+\tif (nvxids > 0)\n+\t{\n+\t\tdo\n+\t\t{\n+\t\t\tpg_usleep(10000L);\t/* wait for 10 msec */\n+\t\t} while (HaveVirtualXIDsDelayingChkpt(vxids, nvxids,\n+\t\t\t\t\t\t\t\t\t\t\t DELAY_CHKPT_COMPLETE));\n+\t}\n+\tpfree(vxids);\n+\n \t/*\n \t * Take a snapshot of running transactions and write this to WAL. This\n \t * allows us to reconstruct the state of running transactions during\ndiff --git a/src/backend/access/transam/xloginsert.c b/src/backend/access/transam/xloginsert.c\nindex 579d8de775..6ff19814d4 100644\n--- a/src/backend/access/transam/xloginsert.c\n+++ b/src/backend/access/transam/xloginsert.c\n@@ -899,7 +899,7 @@ XLogSaveBufferForHint(Buffer buffer, bool buffer_std)\n \t/*\n \t * Ensure no checkpoint can change our view of RedoRecPtr.\n \t */\n-\tAssert(MyPgXact->delayChkpt);\n+\tAssert((MyPgXact->delayChkpt & DELAY_CHKPT_START) != 0);\n \n \t/*\n \t * Update RedoRecPtr so that we can make the right decision\ndiff --git a/src/backend/catalog/storage.c b/src/backend/catalog/storage.c\nindex 9a5fde00ca..729fb92c5f 100644\n--- a/src/backend/catalog/storage.c\n+++ b/src/backend/catalog/storage.c\n@@ -28,6 +28,7 @@\n #include \"catalog/storage.h\"\n #include \"catalog/storage_xlog.h\"\n #include \"storage/freespace.h\"\n+#include \"storage/proc.h\"\n #include \"storage/smgr.h\"\n #include \"utils/memutils.h\"\n #include \"utils/rel.h\"\n@@ -249,6 +250,22 @@ 
RelationTruncate(Relation rel, BlockNumber nblocks)\n \tif (vm)\n \t\tvisibilitymap_truncate(rel, nblocks);\n \n+\t/*\n+\t * Make sure that a concurrent checkpoint can't complete while truncation\n+\t * is in progress.\n+\t *\n+\t * The truncation operation might drop buffers that the checkpoint\n+\t * otherwise would have flushed. If it does, then it's essential that\n+\t * the files actually get truncated on disk before the checkpoint record\n+\t * is written. Otherwise, if replay begins from that checkpoint, the\n+\t * to-be-truncated blocks might still exist on disk but have older\n+\t * contents than expected, which can cause replay to fail. It's OK for\n+\t * the blocks to not exist on disk at all, but not for them to have the\n+\t * wrong contents.\n+\t */\n+\tAssert((MyPgXact->delayChkpt & DELAY_CHKPT_COMPLETE) == 0);\n+\tMyPgXact->delayChkpt |= DELAY_CHKPT_COMPLETE;\n+\n \t/*\n \t * We WAL-log the truncation before actually truncating, which means\n \t * trouble if the truncation fails. If we then crash, the WAL replay\n@@ -287,8 +304,15 @@ RelationTruncate(Relation rel, BlockNumber nblocks)\n \t\t\tXLogFlush(lsn);\n \t}\n \n-\t/* Do the real work */\n+\t/*\n+\t * This will first remove any buffers from the buffer pool that should no\n+\t * longer exist after truncation is complete, and then truncate the\n+\t * corresponding files on disk.\n+\t */\n \tsmgrtruncate(rel->rd_smgr, MAIN_FORKNUM, nblocks);\n+\n+\t/* We've done all the critical work, so checkpoints are OK now. 

*/\n+\tMyPgXact->delayChkpt &= ~DELAY_CHKPT_COMPLETE;\n }\n \n /*\ndiff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c\nindex bafe91ab0d..0b7bdb8634 100644\n--- a/src/backend/storage/buffer/bufmgr.c\n+++ b/src/backend/storage/buffer/bufmgr.c\n@@ -3469,7 +3469,9 @@ MarkBufferDirtyHint(Buffer buffer, bool buffer_std)\n \t\t\t * essential that CreateCheckpoint waits for virtual transactions\n \t\t\t * rather than full transactionids.\n \t\t\t */\n-\t\t\tMyPgXact->delayChkpt = delayChkpt = true;\n+\t\t\tAssert((MyPgXact->delayChkpt & DELAY_CHKPT_START) == 0);\n+\t\t\tMyPgXact->delayChkpt |= DELAY_CHKPT_START;\n+\t\t\tdelayChkpt = true;\n \t\t\tlsn = XLogSaveBufferForHint(buffer, buffer_std);\n \t\t}\n \n@@ -3502,7 +3504,7 @@ MarkBufferDirtyHint(Buffer buffer, bool buffer_std)\n \t\tUnlockBufHdr(bufHdr, buf_state);\n \n \t\tif (delayChkpt)\n-\t\t\tMyPgXact->delayChkpt = false;\n+\t\t\tMyPgXact->delayChkpt &= ~DELAY_CHKPT_START;\n \n \t\tif (dirtied)\n \t\t{\ndiff --git a/src/backend/storage/ipc/procarray.c b/src/backend/storage/ipc/procarray.c\nindex d739812f23..134b63f28b 100644\n--- a/src/backend/storage/ipc/procarray.c\n+++ b/src/backend/storage/ipc/procarray.c\n@@ -433,7 +433,10 @@ ProcArrayEndTransaction(PGPROC *proc, TransactionId latestXid)\n \t\tpgxact->xmin = InvalidTransactionId;\n \t\t/* must be cleared with xid/xmin: */\n \t\tpgxact->vacuumFlags &= ~PROC_VACUUM_STATE_MASK;\n-\t\tpgxact->delayChkpt = false; /* be sure this is cleared in abort */\n+\n+\t\t/* be sure this is cleared in abort */\n+\t\tpgxact->delayChkpt = 0;\n+\n \t\tproc->recoveryConflictPending = false;\n \n \t\tAssert(pgxact->nxids == 0);\n@@ -455,7 +458,10 @@ ProcArrayEndTransactionInternal(PGPROC *proc, PGXACT *pgxact,\n \tpgxact->xmin = InvalidTransactionId;\n \t/* must be cleared with xid/xmin: */\n \tpgxact->vacuumFlags &= ~PROC_VACUUM_STATE_MASK;\n-\tpgxact->delayChkpt = false; /* be sure this is cleared in abort */\n+\n+\t/* be sure this is cleared in 
abort */\n+\tpgxact->delayChkpt = 0;\n+\n \tproc->recoveryConflictPending = false;\n \n \t/* Clear the subtransaction-XID cache too while holding the lock */\n@@ -2259,7 +2265,8 @@ GetOldestSafeDecodingTransactionId(bool catalogOnly)\n * delaying checkpoint because they have critical actions in progress.\n *\n * Constructs an array of VXIDs of transactions that are currently in commit\n- * critical sections, as shown by having delayChkpt set in their PGXACT.\n+ * critical sections, as shown by having specified delayChkpt bits set in their\n+ * PGXACT.\n *\n * Returns a palloc'd array that should be freed by the caller.\n * *nvxids is the number of valid entries.\n@@ -2273,13 +2280,15 @@ GetOldestSafeDecodingTransactionId(bool catalogOnly)\n * for clearing of delayChkpt to propagate is unimportant for correctness.\n */\n VirtualTransactionId *\n-GetVirtualXIDsDelayingChkpt(int *nvxids)\n+GetVirtualXIDsDelayingChkpt(int *nvxids, int type)\n {\n \tVirtualTransactionId *vxids;\n \tProcArrayStruct *arrayP = procArray;\n \tint\t\t\tcount = 0;\n \tint\t\t\tindex;\n \n+\tAssert(type != 0);\n+\n \t/* allocate what's certainly enough result space */\n \tvxids = (VirtualTransactionId *)\n \t\tpalloc(sizeof(VirtualTransactionId) * arrayP->maxProcs);\n@@ -2292,7 +2301,7 @@ GetVirtualXIDsDelayingChkpt(int *nvxids)\n \t\tvolatile PGPROC *proc = &allProcs[pgprocno];\n \t\tvolatile PGXACT *pgxact = &allPgXact[pgprocno];\n \n-\t\tif (pgxact->delayChkpt)\n+\t\tif ((pgxact->delayChkpt & type) != 0)\n \t\t{\n \t\t\tVirtualTransactionId vxid;\n \n@@ -2318,12 +2327,14 @@ GetVirtualXIDsDelayingChkpt(int *nvxids)\n * those numbers should be small enough for it not to be a problem.\n */\n bool\n-HaveVirtualXIDsDelayingChkpt(VirtualTransactionId *vxids, int nvxids)\n+HaveVirtualXIDsDelayingChkpt(VirtualTransactionId *vxids, int nvxids, int type)\n {\n \tbool\t\tresult = false;\n \tProcArrayStruct *arrayP = procArray;\n \tint\t\t\tindex;\n \n+\tAssert(type != 0);\n+\n 
\tLWLockAcquire(ProcArrayLock, LW_SHARED);\n \n \tfor (index = 0; index < arrayP->numProcs; index++)\n@@ -2335,7 +2346,8 @@ HaveVirtualXIDsDelayingChkpt(VirtualTransactionId *vxids, int nvxids)\n \n \t\tGET_VXID_FROM_PGPROC(vxid, *proc);\n \n-\t\tif (pgxact->delayChkpt && VirtualTransactionIdIsValid(vxid))\n+\t\tif ((pgxact->delayChkpt & type) != 0 &&\n+\t\t\tVirtualTransactionIdIsValid(vxid))\n \t\t{\n \t\t\tint\t\t\ti;\n \ndiff --git a/src/backend/storage/lmgr/proc.c b/src/backend/storage/lmgr/proc.c\nindex 857dfdab09..e5370df019 100644\n--- a/src/backend/storage/lmgr/proc.c\n+++ b/src/backend/storage/lmgr/proc.c\n@@ -377,7 +377,7 @@ InitProcess(void)\n \tMyProc->databaseId = InvalidOid;\n \tMyProc->roleId = InvalidOid;\n \tMyProc->isBackgroundWorker = IsBackgroundWorker;\n-\tMyPgXact->delayChkpt = false;\n+\tMyPgXact->delayChkpt = 0;\n \tMyPgXact->vacuumFlags = 0;\n \t/* NB -- autovac launcher intentionally does not set IS_AUTOVACUUM */\n \tif (IsAutoVacuumWorkerProcess())\n@@ -550,7 +550,7 @@ InitAuxiliaryProcess(void)\n \tMyProc->databaseId = InvalidOid;\n \tMyProc->roleId = InvalidOid;\n \tMyProc->isBackgroundWorker = IsBackgroundWorker;\n-\tMyPgXact->delayChkpt = false;\n+\tMyPgXact->delayChkpt = 0;\n \tMyPgXact->vacuumFlags = 0;\n \tMyProc->lwWaiting = false;\n \tMyProc->lwWaitMode = 0;\ndiff --git a/src/include/storage/proc.h b/src/include/storage/proc.h\nindex 947f69d634..d8dd7bf5e1 100644\n--- a/src/include/storage/proc.h\n+++ b/src/include/storage/proc.h\n@@ -75,6 +75,41 @@ struct XidCache\n */\n #define INVALID_PGPROCNO\t\tPG_INT32_MAX\n \n+/*\n+ * Flags for PGPROC.delayChkpt\n+ *\n+ * These flags can be used to delay the start or completion of a checkpoint\n+ * for short periods. 
A flag is in effect if the corresponding bit is set in\n+ * the PGPROC of any backend.\n+ *\n+ * For our purposes here, a checkpoint has three phases: (1) determine the\n+ * location to which the redo pointer will be moved, (2) write all the\n+ * data durably to disk, and (3) WAL-log the checkpoint.\n+ *\n+ * Setting DELAY_CHKPT_START prevents the system from moving from phase 1\n+ * to phase 2. This is useful when we are performing a WAL-logged modification\n+ * of data that will be flushed to disk in phase 2. By setting this flag\n+ * before writing WAL and clearing it after we've both written WAL and\n+ * performed the corresponding modification, we ensure that if the WAL record\n+ * is inserted prior to the new redo point, the corresponding data changes will\n+ * also be flushed to disk before the checkpoint can complete. (In the\n+ * extremely common case where the data being modified is in shared buffers\n+ * and we acquire an exclusive content lock on the relevant buffers before\n+ * writing WAL, this mechanism is not needed, because phase 2 will block\n+ * until we release the content lock and then flush the modified data to\n+ * disk.)\n+ *\n+ * Setting DELAY_CHKPT_COMPLETE prevents the system from moving from phase 2\n+ * to phase 3. This is useful if we are performing a WAL-logged operation that\n+ * might invalidate buffers, such as relation truncation. In this case, we need\n+ * to ensure that any buffers which were invalidated and thus not flushed by\n+ * the checkpoint are actually destroyed on disk. Replay can cope with a file\n+ * or block that doesn't exist, but not with a block that has the wrong\n+ * contents.\n+ */\n+#define DELAY_CHKPT_START\t\t(1<<0)\n+#define DELAY_CHKPT_COMPLETE\t(1<<1)\n+\n /*\n * Each backend has a PGPROC struct in shared memory. 

There is also a list of\n * currently-unused PGPROC structs that will be reallocated to new backends.\n@@ -217,8 +252,7 @@ typedef struct PGXACT\n \n \tuint8\t\tvacuumFlags;\t/* vacuum-related flags, see above */\n \tbool\t\toverflowed;\n-\tbool\t\tdelayChkpt;\t\t/* true if this proc delays checkpoint start;\n-\t\t\t\t\t\t\t\t * previously called InCommit */\n+\tint\t\t\tdelayChkpt;\t\t/* for DELAY_CHKPT_* flags */\n \n \tuint8\t\tnxids;\n } PGXACT;\ndiff --git a/src/include/storage/procarray.h b/src/include/storage/procarray.h\nindex 08b4b030bb..2b60b27604 100644\n--- a/src/include/storage/procarray.h\n+++ b/src/include/storage/procarray.h\n@@ -92,8 +92,9 @@ extern TransactionId GetOldestXmin(Relation rel, int flags);\n extern TransactionId GetOldestActiveTransactionId(void);\n extern TransactionId GetOldestSafeDecodingTransactionId(bool catalogOnly);\n \n-extern VirtualTransactionId *GetVirtualXIDsDelayingChkpt(int *nvxids);\n-extern bool HaveVirtualXIDsDelayingChkpt(VirtualTransactionId *vxids, int nvxids);\n+extern VirtualTransactionId *GetVirtualXIDsDelayingChkpt(int *nvxids, int type);\n+extern bool HaveVirtualXIDsDelayingChkpt(VirtualTransactionId *vxids,\n+\t\t\t\t\t\t\t\t\t\t int nvxids, int type);\n \n extern PGPROC *BackendPidGetProc(int pid);\n extern PGPROC *BackendPidGetProcWithLock(int pid);\n-- \n2.27.0", "msg_date": "Fri, 18 Mar 2022 10:21:09 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "On Thu, Mar 17, 2022 at 9:21 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> Finally, no two of from 10 to 14 doesn't accept the same patch.\n>\n> As a cross-version check, I compared all combinations of the patches\n> for two adjacent versions and confirmed that no hunks are lost.\n>\n> All versions pass check world.\n\nThanks, committed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 24 Mar 2022 
15:33:29 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Thanks, committed.\n\nSome of the buildfarm is seeing failures in the pg_checksums test.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 24 Mar 2022 18:04:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "On Thu, Mar 24, 2022 at 6:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > Thanks, committed.\n>\n> Some of the buildfarm is seeing failures in the pg_checksums test.\n\nHmm. So the tests seem to be failing because 002_actions.pl stops the\ndatabase cluster, runs pg_checksums (which passes), writes some zero\nbytes over the line pointer array of the first block of pg_class, and\nthen runs pg_checksums again. In the failing buildfarm runs,\npg_checksums fails to detect the corruption: the second run succeeds,\nwhile pg_checksums expects it to fail. That's pretty curious, because\nif the database cluster is stopped, and things are OK at that point,\nthen how could a server bug of any kind cause a Perl script to be\nunable to corrupt a file on disk?\n\nA possible clue is that I also see a few machines failing in\nrecoveryCheck. And the code that is failing there looks like this:\n\n# We've seen occasional cases where multiple walsender pids are active. An\n# immediate shutdown may hide evidence of a locking bug. 
So if multiple\n# walsenders are observed, shut down in fast mode, and collect some more\n# information.\nif (not like($senderpid, qr/^[0-9]+$/, \"have walsender pid $senderpid\"))\n{\n my ($stdout, $stderr);\n $node_primary3->psql('postgres',\n \"\\\\a\\\\t\\nSELECT * FROM pg_stat_activity\",\n stdout => \\$stdout, stderr => \\$stderr);\n diag $stdout, $stderr;\n $node_primary3->stop('fast');\n $node_standby3->stop('fast');\n die \"could not determine walsender pid, can't continue\";\n}\n\nAnd the failure looks like this:\n\n# Failed test 'have walsender pid 1047504\n# 1047472'\n# at t/019_replslot_limit.pl line 343.\n\nThat sure looks like there are multiple walsender PIDs active, and the\npg_stat_activity output confirms it. 1047504 is running\nSTART_REPLICATION SLOT \"rep3\" 0/700000 TIMELINE 1 and 1047472 is\nrunning START_REPLICATION SLOT \"pg_basebackup_1047472\" 0/600000\nTIMELINE 1.\n\nBoth of these failures could possibly be explained by some failure of\nthings to shut down properly, but it's not the same things. In the\nfirst case, the database server would have had to still be running\nafter we run $node->stop, and it would have had to overwrite the bad\ncontents of pg_class with some good contents. In the second case, the\ncluster's supposed to still be running, but the backends that were\ncreating those replication slots should have exited sooner.\n\nI've been running the pg_checksums test in a loop here for a bit now\nin the hopes of being able to reproduce the failure, but it doesn't\nseem to want to fail here. 
And I've also looked over the commit and I\ncan't quite see how it would cause a process, or the cluster, to fail\nto shut down, unless perhaps it's the checkpointer that gets stuck, but\nthat doesn't really seem to match the symptoms.\n\nAny ideas?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 24 Mar 2022 20:37:25 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "On Thu, Mar 24, 2022 at 8:37 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> Any ideas?\n\nAnd ... right after hitting send, I see that the recovery check\nfailures are under separate troubleshooting and thus probably\nunrelated. But that leaves me even more confused. How can a change to\nonly the server code cause a client utility to fail to detect\ncorruption that is being created by Perl while the server is stopped?\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 24 Mar 2022 20:39:27 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> And ... right after hitting send, I see that the recovery check\n> failures are under separate troubleshooting and thus probably\n> unrelated.\n\nYeah, we've been chasing those for months.\n\n> But that leaves me even more confused. How can a change to\n> only the server code cause a client utility to fail to detect\n> corruption that is being created by Perl while the server is stopped?\n\nHmm, I'd supposed that the failing test cases were new as of 412ad7a55.\nNow I see they're not, which indeed puts quite a different spin on\nthings. 
Your thought about maybe the server isn't shut down yet is\ninteresting --- did 412ad7a55 touch anything about the shutdown\nsequence?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 24 Mar 2022 20:45:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "On Thu, Mar 24, 2022 at 8:45 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Hmm, I'd supposed that the failing test cases were new as of 412ad7a55.\n> Now I see they're not, which indeed puts quite a different spin on\n> things. Your thought about maybe the server isn't shut down yet is\n> interesting --- did 412ad7a55 touch anything about the shutdown\n> sequence?\n\nI hate to say \"no\" because the evidence suggests that the answer might\nbe \"yes\" -- but it definitely isn't intending to change anything about\nthe shutdown sequence. It just introduces a mechanism to backends to\nforce the checkpointer to delay writing the checkpoint record.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 24 Mar 2022 21:08:23 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I hate to say \"no\" because the evidence suggests that the answer might\n> be \"yes\" -- but it definitely isn't intending to change anything about\n> the shutdown sequence. It just introduces a mechanism to backends to\n> force the checkpointer to delay writing the checkpoint record.\n\nWait a minute, I think we may be barking up the wrong tree.\n\nThe three commits that serinus saw as new in its first failure were\n\nce95c54376 Thu Mar 24 20:33:13 2022 UTC Fix pg_statio_all_tables view for multiple TOAST indexes. 
\n7dac61402e Thu Mar 24 19:51:40 2022 UTC Remove unused module imports from TAP tests \n412ad7a556 Thu Mar 24 18:52:28 2022 UTC Fix possible recovery trouble if TRUNCATE overlaps a checkpoint.\n\nI failed to look closely at dragonet, but I now see that its\nfirst failure saw\n\nce95c54376 Thu Mar 24 20:33:13 2022 UTC Fix pg_statio_all_tables view for multiple TOAST indexes. \n7dac61402e Thu Mar 24 19:51:40 2022 UTC Remove unused module imports from TAP tests\n\nserinus is 0-for-3 since then, and dragonet 0-for-4, so we can be pretty\nconfident that the failure is repeatable for them. That means that the\nculprit must be ce95c54376 or 7dac61402e, not anything nearby such as\n412ad7a556.\n\nIt's *really* hard to see how the pg_statio_all_tables change could\nhave affected this. So that leaves 7dac61402e, which did this to\nthe test script that's failing:\n \n use strict;\n use warnings;\n-use Config;\n use PostgreSQL::Test::Cluster;\n use PostgreSQL::Test::Utils;\n\nDiscuss.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 24 Mar 2022 21:22:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "Hi,\n\nOn 2022-03-24 20:39:27 -0400, Robert Haas wrote:\n> But that leaves me even more confused. How can a change to only the server\n> code cause a client utility to fail to detect corruption that is being\n> created by Perl while the server is stopped?\n\nI guess it could somehow cause the first page to be all zeroes, in which case\noverwriting it with more zeroes wouldn't cause a problem that pg_checksums can\nsee? But I have a somewhat more realistic idea:\n\nI'm suspicious of pg_checksums --filenode. If I understand correctly\n--filenode scans the data directory, including all tablespaces, for a file\nmatching that filenode. 
If we somehow end up with a leftover file in the pre\nALTER TABLE SET TABLESPACE location, it'd not notice that there *also* is a\nfile in a different place?\n\nPerhaps the --filenode mode should print out the file location...\n\n\nRandomly noticed: The test fetches the block size without doing anything with\nit afaics.\n\nAndres\n\n\n", "msg_date": "Thu, 24 Mar 2022 18:23:47 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "I wrote:\n> ... So that leaves 7dac61402e, which did this to\n> the test script that's failing:\n \n> use strict;\n> use warnings;\n> -use Config;\n> use PostgreSQL::Test::Cluster;\n> use PostgreSQL::Test::Utils;\n\n> Discuss.\n\nAnother thing that seems quite baffling, but is becoming clearer by\nthe hour, is that only serinus and dragonet are seeing this failure.\nHow is that? They're not very similarly configured --- one is gcc,\none clang, and one uses jit and one doesn't. They do share the same\nperl version, 5.34.0; but so do twenty-three other animals, many of\nwhich have reported in cleanly. I'm at a loss to explain that.\nAndres, can you think of anything that's peculiar to those two\nanimals?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 24 Mar 2022 21:59:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "Hi,\n\nOn 2022-03-24 21:22:38 -0400, Tom Lane wrote:\n> serinus is 0-for-3 since then, and dragonet 0-for-4, so we can be pretty\n> confident that the failure is repeatable for them.\n\nThat's weird. They run on the same host, but otherwise they have very little\nin common. There's plenty other animals running on the same machine that\ndidn't report errors.\n\nI copied serinus' configuration, ran the tests repeatedly, without reproducing\nthe failure so far. 
Odd.\n\nCombined with the replslot failure I'd be prepared to think the machine has\nissues, except that the replslot thing triggered on other machines too.\n\n\nI looked through logs on the machine without finding anything indicating\nsomething odd.\n\nI turned on keep_error_builds for serinus. Hopefully that'll leave us with\non-disk files to inspect.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 24 Mar 2022 19:14:02 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "Hi,\n\nOn 2022-03-24 21:59:08 -0400, Tom Lane wrote:\n> Another thing that seems quite baffling, but is becoming clearer by\n> the hour, is that only serinus and dragonet are seeing this failure.\n> How is that? They're not very similarly configured --- one is gcc,\n> one clang, and one uses jit and one doesn't. They do share the same\n> perl version, 5.34.0; but so do twenty-three other animals, many of\n> which have reported in cleanly. I'm at a loss to explain that.\n> Andres, can you think of anything that's peculiar to those two\n> animals?\n\nNo, I'm quite baffled myself. As I noted in an email I just sent, before\nreading this one, I can't explain it, and at least in simple attempts, can't\nreproduce it either. And there are animals much closer to each other than\nthose two...\n\nI forced a run while writing the other email, with keep_error_whatnot, and I\njust saw it failing... 
Looking whether there's anything interesting to glean.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 24 Mar 2022 19:20:10 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "On Fri, Mar 25, 2022 at 3:14 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-03-24 21:22:38 -0400, Tom Lane wrote:\n> > serinus is 0-for-3 since then, and dragonet 0-for-4, so we can be pretty\n> > confident that the failure is repeatable for them.\n>\n> That's weird. They run on the same host, but otherwise they have very little\n> in common. There's plenty other animals running on the same machine that\n> didn't report errors.\n\nOne random thing I've noticed about serinus is that it seems to drop\nUDP packets more than others, but dragonet apparently doesn't:\n\ntmunro=> select animal, count(*) from run where result = 'FAILURE' and\n'stats' = any(fail_tests) and snapshot > now() - interval '3 month'\ngroup by 1 order by 2 desc;\n animal | count\n--------------+-------\n serinus | 14\n flaviventris | 6\n mandrill | 2\n bonito | 1\n seawasp | 1\n crake | 1\n sungazer | 1\n(7 rows)\n\nExample: https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=serinus&dt=2022-03-24%2001:00:14\n\n\n", "msg_date": "Fri, 25 Mar 2022 15:23:24 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "Hi,\n\nOn 2022-03-25 15:23:24 +1300, Thomas Munro wrote:\n> One random thing I've noticed about serinus is that it seems to drop\n> UDP packets more than others, but dragonet apparently doesn't:\n\nSerinus is built with optimization. Which I guess could lead to other backends\nreporting stats more quickly? And of course could lead to running more often\n(due to finishing before the next cron invocation). 
I think I've also\nconfigured my animals to run more often than many other owners.\n\nSo I'm not sure how much can be gleaned from raw \"failure counts\" without\ntaking the number of runs into account as well?\n\n- Andres\n\n\n", "msg_date": "Thu, 24 Mar 2022 19:35:23 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "Hi,\n\nOn 2022-03-24 19:20:10 -0700, Andres Freund wrote:\n> I forced a run while writing the other email, with keep_error_whatnot, and I\n> just saw it failing... Looking whether there's anything interesting to glean.\n\nUnfortunately the test drops the table and it doesn't report the filepath of\nthe failure. So I haven't learned much from the data dir so far.\n\n\nI still don't see a failure when running the tests in a separate source\ntree. Can't explain that. Going to try to get closer to the buildfarm script\nrun - it'd be a whole lot easier to be able to edit the source of the test and\nreproduce...\n\nJust to be sure I'm going to clean out serinus' ccache dir and rerun. I'll\nleave dragonet's alone for now.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 24 Mar 2022 19:43:02 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "On Fri, Mar 25, 2022 at 3:35 PM Andres Freund <andres@anarazel.de> wrote:\n> So I'm not sure how much can be gleaned from raw \"failure counts\" without\n> taking the number of runs into account as well?\n\nAh, right, it does indeed hold the record for most runs in 3 months,\nand taking runs into account its \"stats\" failure rate is clustered\nwith mandrill and seawasp. 
Anyway, clearly not relevant because\ndragonet doesn't even show up in the list.\n\n animal | runs | stats_test_fail_fraction\n---------------+------+--------------------------\n mandrill | 158 | 0.0126582278481013\n seawasp | 85 | 0.0117647058823529\n serinus | 1299 | 0.0107775211701309\n sungazer | 174 | 0.00574712643678161\n flaviventris | 1292 | 0.00464396284829721\n bonito | 313 | 0.00319488817891374\n crake | 743 | 0.00134589502018843\n\n\n", "msg_date": "Fri, 25 Mar 2022 16:02:52 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "Hi,\n\nOn 2022-03-24 19:43:02 -0700, Andres Freund wrote:\n> Just to be sure I'm going to clean out serinus' ccache dir and rerun. I'll\n> leave dragonet's alone for now.\n\nTurns out they had the same dir. But it didn't help.\n\nI haven't yet figured out why, but I now *am* able to reproduce the problem in\nthe buildfarm built tree. Wonder if there's a path length issue or such\nsomewhere?\n\nEither way, I can now manipulate the tests and still repro. I made the test\nabort after the first failure.\n\nhexedit shows that the file is modified, as we'd expect:\n00000000 00 00 00 00 C0 01 5B 01 16 7D 00 00 A0 03 C0 03 00 20 04 20 00 00 00 00 00 00 00 00 00 00 00 00 ......[..}....... . ............\n00000020 00 9F 38 00 80 9F 38 00 60 9F 38 00 40 9F 38 00 20 9F 38 00 00 9F 38 00 E0 9E 38 00 C0 9E 38 00 ..8...8.`.8.@.8. 
.8...8...8...8.\n\nAnd we are checking the right file:\n\nbf@andres-postgres-edb-buildfarm-v1:~/build/buildfarm-serinus/HEAD/pgsql.build$ tmp_install/home/bf/build/buildfarm-serinus/HEAD/inst/bin/pg_checksums --check -D /home/bf/build/buildfarm-serinus/HEAD/pgsql.build/src/bin/pg_checksums/tmp_check/t_002_actions_node_checksum_data/pgdata --filenode 16391 -v\npg_checksums: checksums verified in file \"/home/bf/build/buildfarm-serinus/HEAD/pgsql.build/src/bin/pg_checksums/tmp_check/t_002_actions_node_checksum_data/pgdata/pg_tblspc/16387/PG_15_202203241/5/16391\"\nChecksum operation completed\nFiles scanned: 1\nBlocks scanned: 45\nBad checksums: 0\nData checksum version: 1\n\nIf I twiddle further bits, I see that page failing checksum verification, as\nexpected.\n\nI made the script copy the file before twiddling it around:\n00000000 00 00 00 00 C0 01 5B 01 16 7D 00 00 A0 03 C0 03 00 20 04 20 00 00 00 00 E0 9F 38 00 C0 9F 38 00 ......[..}....... . ......8...8.\n00000020 A0 9F 38 00 80 9F 38 00 60 9F 38 00 40 9F 38 00 20 9F 38 00 00 9F 38 00 E0 9E 38 00 C0 9E 38 00 ..8...8.`.8.@.8. .8...8...8...8.\n\nSo it's indeed modified.\n\n\nThe only thing I can really conclude here is that we apparently end up with\nthe same checksum for exactly the modifications we are doing? Just on those\ntwo damn instances? Reliably?\n\n\nGotta make some food. Suggestions what exactly to look at welcome.\n\n\nGreetings,\n\nAndres Freund\n\nPS: I should really rename the hostname of that machine one of these days...\n\n\n", "msg_date": "Thu, 24 Mar 2022 20:43:01 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> The only thing I can really conclude here is that we apparently end up with\n> the same checksum for exactly the modifications we are doing? Just on those\n> two damn instances? 
Reliably?\n\nIIRC, the table's OID or relfilenode enters into the checksum.\nCould it be that assigning a specific OID to the table allows\nthis to happen, and these two animals are somehow assigning\nthat OID while others are using some slightly different OID?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 25 Mar 2022 00:08:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "Hi,\n\nOn 2022-03-25 00:08:20 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > The only thing I can really conclude here is that we apparently end up with\n> > the same checksum for exactly the modifications we are doing? Just on those\n> > two damn instances? Reliably?\n> \n> IIRC, the table's OID or relfilenode enters into the checksum.\n> Could it be that assigning a specific OID to the table allows\n> this to happen, and these two animals are somehow assigning\n> that OID while others are using some slightly different OID?\n\nIt's just the block number that goes into it.\n\nI do see that the LSN that ends up on the page is the same across a few runs\nof the test on serinus. Which presumably differs between different\nanimals. Surprised that it's this predictable - but I guess the run is short\nenough that there's no variation due to autovacuum, checkpoints etc.\n\nIf I add a 'SELECT txid_current()' before the CREATE TABLE in\ncheck_relation_corruption(), the test doesn't fail anymore, because there's an\nadditional WAL record.\n\n16bit checksums for the win.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 24 Mar 2022 21:54:38 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I do see that the LSN that ends up on the page is the same across a few runs\n> of the test on serinus. 
Which presumably differs between different\n> animals. Surprised that it's this predictable - but I guess the run is short\n> enough that there's no variation due to autovacuum, checkpoints etc.\n\nUh-huh. I'm not surprised that it's repeatable on a given animal.\nWhat remains to be explained:\n\n1. Why'd it start failing now? I'm guessing that ce95c5437 *was* the\nculprit after all, by slightly changing the amount of catalog data\nwritten during initdb, and thus moving the initial LSN.\n\n2. Why just these two animals? If initial LSN is the critical thing,\nthen the results of \"locale -a\" would affect it, so platform\ndependence is hardly surprising ... but I'd have thought that all\nthe animals on that host would use the same initial set of\ncollations. OTOH, I see petalura and pogona just fell over too.\nDo you have some of those animals --with-icu and others not?\n\n> 16bit checksums for the win.\n\nYay :-(\n\nAs for a fix, would damaging more of the page help? I guess\nit'd just move around the one-in-64K chance of failure.\nMaybe we have to intentionally corrupt (e.g. invert) the\nchecksum field specifically.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 25 Mar 2022 01:23:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "Hi,\n\nOn 2022-03-24 21:54:38 -0700, Andres Freund wrote:\n> I do see that the LSN that ends up on the page is the same across a few runs\n> of the test on serinus. Which presumably differs between different\n> animals. Surprised that it's this predictable - but I guess the run is short\n> enough that there's no variation due to autovacuum, checkpoints etc.\n\nThis actually explains how the issue could start to be visible with\nce95c543763. It changes the amount of WAL initdb generates and therefore\ninfluences what LSN the page ends up with. 
I've verified that the failing\ntest is reproducible with ce95c543763, but not its parent 7dac61402e3. While\nof course ce95c543763 isn't \"actually responsible\".\n\nAh, and that's finally also the explanation why I couldn't reproduce the\nfailure it in a different directory, with an otherwise identically configured\nPG: The length of the path to the tablespace influences the size of the\nXLOG_TBLSPC_CREATE record.\n\n\nNot sure what to do here... I guess we can just change the value we overwrite\nthe page with and hope to not hit this again? But that feels deeply deeply\nunsatisfying.\n\nPerhaps it would be enough to write into multiple parts of the page? I am very\nmuch not a cryptographical expert, but the way pg_checksum_block() works, it\nlooks to me that \"multiple\" changes within a 16 byte chunk have a smaller\ninfluence on the overall result than the same \"amount\" of changes to separate\n16 byte chunks.\n\n\nI might have to find a store still selling strong beverages at this hour.\n\n- Andres\n\n\n", "msg_date": "Thu, 24 Mar 2022 22:26:54 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "Hi,\n\nOn 2022-03-25 01:23:00 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I do see that the LSN that ends up on the page is the same across a few runs\n> > of the test on serinus. Which presumably differs between different\n> > animals. Surprised that it's this predictable - but I guess the run is short\n> > enough that there's no variation due to autovacuum, checkpoints etc.\n> \n> Uh-huh. I'm not surprised that it's repeatable on a given animal.\n> What remains to be explained:\n> \n> 1. Why'd it start failing now? I'm guessing that ce95c5437 *was* the\n> culprit after all, by slightly changing the amount of catalog data\n> written during initdb, and thus moving the initial LSN.\n\nYep, verified that (see mail I just sent).\n\n\n> 2. 
Why just these two animals? If initial LSN is the critical thing,\n> then the results of \"locale -a\" would affect it, so platform\n> dependence is hardly surprising ... but I'd have thought that all\n> the animals on that host would use the same initial set of\n> collations.\n\nI think it's the animal's name that makes the difference, due to the\ntablespace path lenght thing. And while I was confused for a second by\n\npetalura\npogona\nserinus\ndragonet\n\nfailing, despite different name lengths, it still makes sense: We MAXALIGN the\nstart of records. Which explains why flaviventris didn't fail the same way.\n\n\n> As for a fix, would damaging more of the page help? I guess\n> it'd just move around the one-in-64K chance of failure.\n\nAs I wrote in the other email, I think spreading the changes out wider might\nhelp. But it's still not great. However:\n\n> Maybe we have to intentionally corrupt (e.g. invert) the\n> checksum field specifically.\n\nseems like it'd do the trick? Even a single bit change of the checksum ought\nto do, as long as we don't set it to 0.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 24 Mar 2022 22:34:45 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Ah, and that's finally also the explanation why I couldn't reproduce the\n> failure it in a different directory, with an otherwise identically configured\n> PG: The length of the path to the tablespace influences the size of the\n> XLOG_TBLSPC_CREATE record.\n\nOoooohhh ... yeah, that could explain a lot of cross-animal variation.\n\n> Not sure what to do here... I guess we can just change the value we overwrite\n> the page with and hope to not hit this again? 
But that feels deeply deeply\n> unsatisfying.\n\nAFAICS, this strategy of whacking a predetermined chunk of the page with\na predetermined value is going to fail 1-out-of-64K times. We have to\nchange the test so that it's guaranteed to produce an invalid checksum.\nInverting just the checksum field, without doing anything else, would\ndo that ... but that feels pretty unsatisfying too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 25 Mar 2022 01:38:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "Hi,\n\nOn 2022-03-25 01:38:45 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Not sure what to do here... I guess we can just change the value we overwrite\n> > the page with and hope to not hit this again? But that feels deeply deeply\n> > unsatisfying.\n> \n> AFAICS, this strategy of whacking a predetermined chunk of the page with\n> a predetermined value is going to fail 1-out-of-64K times.\n\nYea. I suspect that the way the modifications and checksumming are done are\nactually higher chance than 1/64k. But even it actually is 1/64k, it's not\ngreat to wait for (#animals * #catalog-changes) to approach a decent\npercentage of 1/64k.\n\n\nI'm was curious whether there have been similar issues in the past. Querying\nthe buildfarm logs suggests not, at least not in the pg_checksums test.\n\n\n> We have to change the test so that it's guaranteed to produce an invalid\n> checksum. Inverting just the checksum field, without doing anything else,\n> would do that ... 
but that feels pretty unsatisfying too.\n\nWe really ought to find a way to get to wider checksums :/\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 24 Mar 2022 23:07:37 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "On Fri, Mar 25, 2022 at 2:07 AM Andres Freund <andres@anarazel.de> wrote:\n> We really ought to find a way to get to wider checksums :/\n\nEh, let's just use longer names for the buildfarm animals and call it good. :-)\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 25 Mar 2022 09:22:04 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-03-25 01:38:45 -0400, Tom Lane wrote:\n>> AFAICS, this strategy of whacking a predetermined chunk of the page with\n>> a predetermined value is going to fail 1-out-of-64K times.\n\n> Yea. I suspect that the way the modifications and checksumming are done are\n> actually higher chance than 1/64k. But even it actually is 1/64k, it's not\n> great to wait for (#animals * #catalog-changes) to approach a decent\n> percentage of 1/64k.\n\nExactly.\n\n> I'm was curious whether there have been similar issues in the past. Querying\n> the buildfarm logs suggests not, at least not in the pg_checksums test.\n\nThat test has only been there since 2018 (b34e84f16). 
We've probably\naccumulated a couple hundred initial-catalog-contents changes since\nthen, so maybe this failure arrived right on schedule :-(.\n\n> We really ought to find a way to get to wider checksums :/\n\nThat'll just reduce the probability of failure, not eliminate it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 25 Mar 2022 09:49:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "On Fri, Mar 25, 2022 at 9:49 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> That'll just reduce the probability of failure, not eliminate it.\n\nI mean, if the expected time to the first failure on even 1 machine\nexceeds the time until the heat death of the universe by 10 orders of\nmagnitude, it's probably good enough.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 25 Mar 2022 09:53:57 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Mar 25, 2022 at 9:49 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> That'll just reduce the probability of failure, not eliminate it.\n\n> I mean, if the expected time to the first failure on even 1 machine\n> exceeds the time until the heat death of the universe by 10 orders of\n> magnitude, it's probably good enough.\n\nAdding another 16 bits won't get you to that, sadly. 
Yeah, it *might*\nextend the MTTF to more than the project's likely lifespan, but that\ndoesn't mean we couldn't get unlucky next week.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 25 Mar 2022 10:02:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "On Fri, Mar 25, 2022 at 10:02 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Fri, Mar 25, 2022 at 9:49 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> That'll just reduce the probability of failure, not eliminate it.\n>\n> > I mean, if the expected time to the first failure on even 1 machine\n> > exceeds the time until the heat death of the universe by 10 orders of\n> > magnitude, it's probably good enough.\n>\n> Adding another 16 bits won't get you to that, sadly. Yeah, it *might*\n> extend the MTTF to more than the project's likely lifespan, but that\n> doesn't mean we couldn't get unlucky next week.\n\nI suspect that the number of bits Andres wants to add is no less than 48.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 25 Mar 2022 10:13:28 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Mar 25, 2022 at 10:02 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Adding another 16 bits won't get you to that, sadly. Yeah, it *might*\n>> extend the MTTF to more than the project's likely lifespan, but that\n>> doesn't mean we couldn't get unlucky next week.\n\n> I suspect that the number of bits Andres wants to add is no less than 48.\n\nI dunno. 
Compatibility and speed concerns aside, that seems like an awful\nlot of bits to be expending on every page compared to the value.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 25 Mar 2022 10:34:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "On Fri, Mar 25, 2022 at 10:34 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I dunno. Compatibility and speed concerns aside, that seems like an awful\n> lot of bits to be expending on every page compared to the value.\n\nI dunno either, but over on the TDE thread people seemed quite willing\nto expend like 16-32 *bytes* for page verifiers and nonces and things.\nFor compatibility and speed reasons, I doubt we could ever get by with\ndoing that in every cluster, but I do have some hope of introducing\nsomething like that someday at least as an optional feature. It's not\nlike a 16-bit checksum was state-of-the-art even when we introduced\nit. We just did it because we had 2 bytes that we could repurpose\nrelatively painlessly, and not any larger number. And that's still the\ncase today, so at least in the short term we will have to choose some\nother solution to this problem.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 25 Mar 2022 10:49:22 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> ... It's not\n> like a 16-bit checksum was state-of-the-art even when we introduced\n> it. We just did it because we had 2 bytes that we could repurpose\n> relatively painlessly, and not any larger number. And that's still the\n> case today, so at least in the short term we will have to choose some\n> other solution to this problem.\n\nIndeed. 
I propose the attached, which also fixes the unsafe use\nof seek() alongside syswrite(), directly contrary to what \"man perlfunc\"\nsays to do.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 25 Mar 2022 11:50:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "Hi,\n\nOn 2022-03-25 11:50:48 -0400, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > ... It's not\n> > like a 16-bit checksum was state-of-the-art even when we introduced\n> > it. We just did it because we had 2 bytes that we could repurpose\n> > relatively painlessly, and not any larger number. And that's still the\n> > case today, so at least in the short term we will have to choose some\n> > other solution to this problem.\n> \n> Indeed. I propose the attached, which also fixes the unsafe use\n> of seek() alongside syswrite(), directly contrary to what \"man perlfunc\"\n> says to do.\n\nThat looks reasonable. Although I wonder if we lose something by not testing\nthe influence of the rest of the block - but I don't really see anything.\n\nThe same code also exists in src/bin/pg_basebackup/t/010_pg_basebackup.pl,\nwhich presumably has the same collision risks. 
Perhaps we should put a\nfunction into Cluster.pm and use it from both?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 25 Mar 2022 09:11:53 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> The same code also exists in src/bin/pg_basebackup/t/010_pg_basebackup.pl,\n> which presumably has the same collision risks.\n\nOooh, I missed that.\n\n> Perhaps we should put a\n> function into Cluster.pm and use it from both?\n\n+1, I'll make it so.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 25 Mar 2022 12:20:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> ... It's not\n>> like a 16-bit checksum was state-of-the-art even when we introduced\n>> it. We just did it because we had 2 bytes that we could repurpose\n>> relatively painlessly, and not any larger number. And that's still the\n>> case today, so at least in the short term we will have to choose some\n>> other solution to this problem.\n>\n> Indeed. I propose the attached, which also fixes the unsafe use\n> of seek() alongside syswrite(), directly contrary to what \"man perlfunc\"\n> says to do.\n\nLGTM, but it would be good to include $! 
in the die messages.\n\n- ilmari\n\n> \t\t\tregards, tom lane\n>\n> diff --git a/src/bin/pg_checksums/t/002_actions.pl b/src/bin/pg_checksums/t/002_actions.pl\n> index 62c608eaf6..8c70453a45 100644\n> --- a/src/bin/pg_checksums/t/002_actions.pl\n> +++ b/src/bin/pg_checksums/t/002_actions.pl\n> @@ -24,6 +24,7 @@ sub check_relation_corruption\n> \tmy $tablespace = shift;\n> \tmy $pgdata = $node->data_dir;\n> \n> +\t# Create table and discover its filesystem location.\n> \t$node->safe_psql(\n> \t\t'postgres',\n> \t\t\"CREATE TABLE $table AS SELECT a FROM generate_series(1,10000) AS a;\n> @@ -37,9 +38,6 @@ sub check_relation_corruption\n> \tmy $relfilenode_corrupted = $node->safe_psql('postgres',\n> \t\t\"SELECT relfilenode FROM pg_class WHERE relname = '$table';\");\n> \n> -\t# Set page header and block size\n> -\tmy $pageheader_size = 24;\n> -\tmy $block_size = $node->safe_psql('postgres', 'SHOW block_size;');\n> \t$node->stop;\n> \n> \t# Checksums are correct for single relfilenode as the table is not\n> @@ -55,8 +53,12 @@ sub check_relation_corruption\n> \n> \t# Time to create some corruption\n> \topen my $file, '+<', \"$pgdata/$file_corrupted\";\n> -\tseek($file, $pageheader_size, SEEK_SET);\n> -\tsyswrite($file, \"\\0\\0\\0\\0\\0\\0\\0\\0\\0\");\n> +\tmy $pageheader;\n> +\tsysread($file, $pageheader, 24) or die \"sysread failed\";\n> +\t# This inverts the pd_checksum field (only); see struct PageHeaderData\n> +\t$pageheader ^= \"\\0\\0\\0\\0\\0\\0\\0\\0\\xff\\xff\";\n> +\tsysseek($file, 0, 0) or die \"sysseek failed\";\n> +\tsyswrite($file, $pageheader) or die \"syswrite failed\";\n> \tclose $file;\n> \n> \t# Checksum checks on single relfilenode fail\n\n\n", "msg_date": "Fri, 25 Mar 2022 16:26:52 +0000", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org> writes:\n> LGTM, but it would 
be good to include $! in the die messages.\n\nRoger, will do.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 25 Mar 2022 13:31:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "On Fri, Mar 25, 2022 at 10:34:49AM -0400, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> On Fri, Mar 25, 2022 at 10:02 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Adding another 16 bits won't get you to that, sadly. Yeah, it *might*\n>>> extend the MTTF to more than the project's likely lifespan, but that\n>>> doesn't mean we couldn't get unlucky next week.\n> \n>> I suspect that the number of bits Andres wants to add is no less than 48.\n> \n> I dunno. Compatibility and speed concerns aside, that seems like an awful\n> lot of bits to be expending on every page compared to the value.\n\nErr. And there are not that many bits that could be recycled for this\npurpose in the current page layout, aren't there?\n--\nMichael", "msg_date": "Sat, 26 Mar 2022 15:03:00 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "At Thu, 24 Mar 2022 15:33:29 -0400, Robert Haas <robertmhaas@gmail.com> wrote in \n> On Thu, Mar 17, 2022 at 9:21 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > All versions pass check world.\n> \n> Thanks, committed.\n\n(I was overwhelmed by the flood of following discussion..)\n\nAnyway, thanks for picking up this and committing!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 28 Mar 2022 09:59:02 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Fri, Mar 25, 2022 at 10:34 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I 
dunno. Compatibility and speed concerns aside, that seems like an awful\n> > lot of bits to be expending on every page compared to the value.\n> \n> I dunno either, but over on the TDE thread people seemed quite willing\n> to expend like 16-32 *bytes* for page verifiers and nonces and things.\n\nAbsolutely.\n\n> For compatibility and speed reasons, I doubt we could ever get by with\n> doing that in every cluster, but I do have some hope of introducing\n> something like that someday at least as an optional feature. It's not\n> like a 16-bit checksum was state-of-the-art even when we introduced\n> it. We just did it because we had 2 bytes that we could repurpose\n> relatively painlessly, and not any larger number. And that's still the\n> case today, so at least in the short term we will have to choose some\n> other solution to this problem.\n\nI agree that this would be great as an optional feature. Those patches\nto allow the system to be built with reserved space for $whatever would\nallow us to have a larger checksum for those who want it and perhaps a\nnonce with TDE for those who wish that in the future. I mentioned\nbefore that I thought it might be a good way to introduce page-level\nepochs for 64bit xids too though it never seemed to get much traction.\n\nAnyhow, this whole thread has struck me as a good reason to polish those\npatches off and add on top of them an extended checksum ability, first,\nindependent of TDE, and remove the dependency of those patches from the\nTDE effort and instead allow it to just leverage that ability. 
I still\nsuspect we'll have some folks who will want TDE w/o a per-page nonce and\nthat could be an option but we'd be able to support TDE w/ integrity\npretty easily, which would be fantastic.\n\nThanks,\n\nStephen", "msg_date": "Tue, 29 Mar 2022 12:34:16 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" }, { "msg_contents": "On Tue, Mar 29, 2022 at 12:34 PM Stephen Frost <sfrost@snowman.net> wrote:\n> Anyhow, this whole thread has struck me as a good reason to polish those\n> patches off and add on top of them an extended checksum ability, first,\n> independent of TDE, and remove the dependency of those patches from the\n> TDE effort and instead allow it to just leverage that ability. I still\n> suspect we'll have some folks who will want TDE w/o a per-page nonce and\n> that could be an option but we'd be able to support TDE w/ integrity\n> pretty easily, which would be fantastic.\n\nYes, I like that idea. Once we get beyond feature freeze, perhaps we\ncan try to coordinate to avoid duplication of effort -- or absence of\neffort.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 29 Mar 2022 13:04:56 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Corruption during WAL replay" } ]
[ { "msg_contents": "Hi,\n\nWhile looking to understand what could be going on with [1], I think I\nmight have stumbled on a few issues that could at least explain parts of\nthe problem.\n\nFirst, it seems to me that vac_update_datfrozenxid() can spuriously\n(but temporarily) fail due to the 'bogus'\nlogic. vac_update_datfrozenxid() first determines the 'newest' xid it\naccepts:\n /*\n * Identify the latest relfrozenxid and relminmxid values that we could\n * validly see during the scan. These are conservative values, but it's\n * not really worth trying to be more exact.\n */\n lastSaneFrozenXid = ReadNewTransactionId();\n lastSaneMinMulti = ReadNextMultiXactId();\n\nand then starts a full table scan of pg_class to determine the database\nwide relfrozenxid:\n scan = systable_beginscan(relation, InvalidOid, false,\n NULL, 0, NULL);\n\nWhenever vac_update_datfrozenxid() encounters a relfrozenxid that's \"too\nnew\", it'll bail out:\n /*\n * If things are working properly, no relation should have a\n * relfrozenxid or relminmxid that is \"in the future\". However, such\n * cases have been known to arise due to bugs in pg_upgrade. If we\n * see any entries that are \"in the future\", chicken out and don't do\n * anything. This ensures we won't truncate clog before those\n * relations have been scanned and cleaned up.\n */\n if (TransactionIdPrecedes(lastSaneFrozenXid, classForm->relfrozenxid) ||\n MultiXactIdPrecedes(lastSaneMinMulti, classForm->relminmxid))\n {\n bogus = true;\n break;\n }\n\nWhich all in theory makes sense - but there's two fairly sizable races\nhere:\n1) Because lastSaneFrozenXid is determined *before* the snapshot for\n the pg_class, it's entirely possible that concurrently another vacuum on\n another table in the same database starts & finishes. 
And that very well\n can end up using a relfrozenxid that's newer than the current\n database.\n2) Much worse, because vac_update_relstats() uses heap_inplace_update(),\n there's actually no snapshot semantics here anyway! Which means that\n the window for a race isn't just between the ReadNewTransactionId()\n and the systable_beginscan(), but instead lasts for the duration of\n the entire scan.\n\nEither of these can then trigger vac_update_datfrozenxid() to\n*silently* bail out without touching anything.\n\n\n\nI think there's also another (even larger?) race in\nvac_update_datfrozenxid(): Unless I miss something, two backends can\nconcurrently run through the scan in vac_update_datfrozenxid() for two\ndifferent tables in the same database, both can check that they need to\nupdate the pg_database row, and then one of them can overwrite the\nresults of the other. And the one that updates last might actually be\nthe one with an older horizon. This is possible since there is no 'per\ndatabase' locking in heap_vacuum_rel().\n\n\n\nThe way I suspect that interacts with the issue in [1] is that once\nthat has happened, do_start_worker() figures out it's doing a xid\nwraparound vacuum based on the \"wrong\" datfrozenxid:\n\t\t/* Check to see if this one is at risk of wraparound */\n\t\tif (TransactionIdPrecedes(tmp->adw_frozenxid, xidForceLimit))\n\t\t{\n\t\t\tif (avdb == NULL ||\n\t\t\t\tTransactionIdPrecedes(tmp->adw_frozenxid,\n\t\t\t\t\t\t\t\t\t avdb->adw_frozenxid))\n\t\t\t\tavdb = tmp;\n\t\t\tfor_xid_wrap = true;\n\t\t\tcontinue;\n\t\t}\n\t\telse if (for_xid_wrap)\n\t\t\tcontinue;\t\t\t/* ignore not-at-risk DBs */\n\nwhich means autovac launcher will only start workers for that database -\nbut there might actually not be any work for it.\n\n\nI have some theories as to why this is more common in 12, but I've not\nfully nailed them down.\n\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/20200323152247.GB52612%40nol\n\n\n", "msg_date": "Mon, 23 Mar 2020 
16:50:36 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Autovacuum vs vac_update_datfrozenxid() vs ?" }, { "msg_contents": "On Mon, Mar 23, 2020 at 04:50:36PM -0700, Andres Freund wrote:\n> Which all in theory makes sense - but there's two fairly sizable races\n> here:\n> 1) Because lastSaneFrozenXid is determined *before* the snapshot for\n> the pg_class, it's entirely possible that concurrently another vacuum on\n> another table in the same database starts & finishes. And that very well\n> can end up using a relfrozenxid that's newer than the current\n> database.\n> 2) Much worse, because vac_update_relstats() uses heap_inplace_update(),\n> there's actually no snapshot semantics here anyway! Which means that\n> the window for a race isn't just between the ReadNewTransactionId()\n> and the systable_beginscan(), but instead lasts for the duration of\n> the entire scan.\n> \n> Either of these can then triggers vac_update_datfrozenxid() to\n> *silently* bail out without touching anything.\n\nHmm. It looks that you are right for both points, with 2) being much\nmore plausible to trigger. Don't we have other issues with the cutoff\ncalculations done within vacuum_set_xid_limits()? We decide if an\naggressive job happens depending on the data in the relation cache,\nbut that may not play well with concurrent jobs that just finished\nwith invalidations?\n\n> I think there's also another (even larger?) race in\n> vac_update_datfrozenxid(): Unless I miss something, two backends can\n> concurrently run through the scan in vac_update_datfrozenxid() for two\n> different tables in the same database, both can check that they need to\n> update the pg_database row, and then one of them can overwrite the\n> results of the other. And the one that updates last might actually be\n> the one with an older horizon. 
This is possible since there is no 'per\n> database' locking in heap_vacuum_rel().\n\nThe chances of hitting this get higher with a higher number of max\nworkers, and it is easy to finish with multiple concurrent workers on\nthe same database with a busy system. Perhaps a reason why it was\nharder for me to reproduce the problem was that I was just using\nthe default for autovacuum_max_workers. Looks worth trying with a\ncrazily high value for the max number of workers and more relations\ndoing a bunch of heavy updates (the application I saw facing a\nlockdown uses literally hundreds of relations with a poor man's\npartitioning schema and some tables of the schema are heavily\nupdated).\n\n> The way I suspect that interacts with the issue in [1] ist that once\n> that has happened, do_start_worker() figures out it's doing a xid\n> wraparound vacuum based on the \"wrong\" datfrozenxid:\n> \t\t/* Check to see if this one is at risk of wraparound */\n> \t\tif (TransactionIdPrecedes(tmp->adw_frozenxid, xidForceLimit))\n> \t\t{\n> \t\t\tif (avdb == NULL ||\n> \t\t\t\tTransactionIdPrecedes(tmp->adw_frozenxid,\n> \t\t\t\t\t\t\t\t\t avdb->adw_frozenxid))\n> \t\t\t\tavdb = tmp;\n> \t\t\tfor_xid_wrap = true;\n> \t\t\tcontinue;\n> \t\t}\n> \t\telse if (for_xid_wrap)\n> \t\t\tcontinue;\t\t\t/* ignore not-at-risk DBs */\n> \n> which means autovac launcher will only start workers for that database -\n> but there might actually not be any work for it.\n\nActually, I don't think that pg_database.datfrozenxid explains\neverything. 
As per what I saw this field is getting freshly updated.\nSo workers are spawned for a wraparound, and the relation-level stats\ncause workers to try to autovacuum a table but nothing happens at the\nend.\n\n> I have some theories as to why this is more common in 12, but I've not\n> fully nailed them down.\n\nIt seems to me that 2aa6e33 also helps in triggering those issues more\neasily as it creates a kind of cascading effect with the workers still\ngetting spawned by the launcher. However the workers finish by\nhandling relations which have nothing to do as I suspect that skipping\nthe anti-wraparound and non-aggressive jobs creates a reduction of the\nnumber of relation stat updates done via vac_update_relstats(), where\nindividual relations finish in a state where they consider that they\ncannot do any more work, being put in a state where they all consider\nnon-aggressive but anti-wraparound jobs as the norm, causing nothing\nto happen as we saw in the thread of -general mentioned by Andres\nupthread. And then things just loop over again and again. Note that\nit is also mentioned on the other thread that issuing a manual VACUUM\nFREEZE fixes temporarily the lockdown issue as it forces an update of\nthe relation stats. So it seems to me that reverting 2aa6e33 is a\nnecessary first step to prevent the lockdown to happen, still that\ndoes not address the actual issues causing anti-wraparound and\nnon-aggressive jobs to exist.\n--\nMichael", "msg_date": "Wed, 25 Mar 2020 17:29:00 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Autovacuum vs vac_update_datfrozenxid() vs ?" }, { "msg_contents": "On Mon, Mar 23, 2020 at 04:50:36PM -0700, Andres Freund wrote:\n> I think there's also another (even larger?) 
race in\n> vac_update_datfrozenxid(): Unless I miss something, two backends can\n> concurrently run through the scan in vac_update_datfrozenxid() for two\n> different tables in the same database, both can check that they need to\n> update the pg_database row, and then one of them can overwrite the\n> results of the other. And the one that updates last might actually be\n> the one with an older horizon. This is possible since there is no 'per\n> database' locking in heap_vacuum_rel().\n\nRight. This thread has a fix:\nhttps://www.postgresql.org/message-id/flat/20190218073103.GA1434723%40rfd.leadboat.com\n\nThe CF entry blocking it is starting to see some activity. (Your discovery\ninvolving lastSaneFrozenXid would need a separate fix.)\n\n\n", "msg_date": "Wed, 1 Apr 2020 22:03:31 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Autovacuum vs vac_update_datfrozenxid() vs ?" } ]
[ { "msg_contents": "Hey Hackers,\nGreetings!\nI am trying to submit a draft proposal for this task:\n\nDevelop Performance Farm Benchmarks and Website (2020)\n\n\nI had similar benchmarking for RocksDB in the past and I am very interested\nin doing similar task for Postgresql.\n\nPFA my report for RocksDB and let me know if you find it interesting.\n\nI hope this will be a good place to get started.\n\n\nRegards\n\nGyati", "msg_date": "Tue, 24 Mar 2020 00:06:45 -0700", "msg_from": "Gyati Mittal <gmittal4@horizon.csueastbay.edu>", "msg_from_op": true, "msg_subject": "Applying for GSOC 2020 | Need review of proposal" }, { "msg_contents": "Greetings,\n\n* Gyati Mittal (gmittal4@horizon.csueastbay.edu) wrote:\n> I am trying to submit a draft proposal for this task:\n> \n> Develop Performance Farm Benchmarks and Website (2020)\n\nGreat! I'd suggest you reach out to the mentor listed on the wiki page\nfor that project to chat about what a good proposal would look like.\n\nThanks,\n\nStephen", "msg_date": "Wed, 25 Mar 2020 09:46:25 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Applying for GSOC 2020 | Need review of proposal" } ]
[ { "msg_contents": "Hello John,\n\nthis looks like a nice feature. I'm wondering how it relates to the \nfollowing use-case:\n\nWhen drawing charts, the user can select pre-defined widths on times \n(like \"15 min\", \"1 hour\").\n\nThe data for these slots is fitted always to intervals that start in 0 \nin the slot, e.g. if the user selects \"15 min\", the interval always \nstarts at xx:00, xx:15, xx:30 or xx:45. This is to aid caching of the \nresulting data, and so that requesting the same chart at xx:00 and xx:01 \nactually draws the same chart, and not some bar with only one minute \ndata at the end.\n\nIn PSQL, this works out to using this as GROUP BY and then summing up \nall values:\n\n SELECT FLOOR(EXTRACT(EPOCH from thetime) / 3600) * 3600, SUM(events) \nFROM mytable ... GROUP BY 1;\n\nwhereas here 3600 means \"hourly\".\n\nIt is of course easy for things like \"1 hour\", but for yearly I had to \ncome up with things like:\n\n EXTRACT(YEAR FROM thetime) * 12 + EXTRACT(MONTH FROM thetime)\n\nand it gets even more involved for weeks, weekdays or odd things like \nfortnights.\n\nSo would that mean one could replace this by:\n\n date_trunc_interval('3600 seconds'::interval, thetime)\n\nand it would be easier, and (hopefully) faster?\n\nBest regards,\n\nTels\n\n\n", "msg_date": "Tue, 24 Mar 2020 14:34:52 +0100", "msg_from": "Tels <nospam-pg-abuse@bloodgate.com>", "msg_from_op": true, "msg_subject": "Re: truncating timestamps on arbitrary intervals" }, { "msg_contents": "On Tue, Mar 24, 2020 at 9:34 PM Tels <nospam-pg-abuse@bloodgate.com> wrote:\n>\n> Hello John,\n>\n> this looks like a nice feature. I'm wondering how it relates to the\n> following use-case:\n>\n> When drawing charts, the user can select pre-defined widths on times\n> (like \"15 min\", \"1 hour\").\n>\n> The data for these slots is fitted always to intervalls that start in 0\n> in the slot, e.g. if the user selects \"15 min\", the interval always\n> starts at xx:00, xx:15, xx:30 or xx:45. 
This is to aid caching of the\n> resulting data, and so that requesting the same chart at xx:00 and xx:01\n> actually draws the same chart, and not some bar with only one minute\n> data at at the end.\n\nHi Tels, thanks for your interest! Sounds like the exact use case I had in mind.\n\n> It is of course easy for things like \"1 hour\", but for yearly I had to\n> come up with things like:\n>\n> EXTRAC(YEAR FROM thetime) * 12 + EXTRACT(MONTH FROM thetime)\n>\n> and its gets even more involved for weeks, weekdays or odd things like\n> fortnights.\n\nTo be clear, this particular case was already handled by the existing\ndate_trunc, but only single units and a few other special cases. I\nunderstand if you have to write code to handle 15 minutes, you need to\nuse that structure for all cases.\n\nFortnight would be trivial:\n\ndate_trunc_interval('14 days'::interval, thetime [, optional start of\nthe fortnight])\n\n...but weekdays would still need something extra.\n\n> So would that mean one could replace this by:\n>\n> date_trunc_interval('3600 seconds'::interval, thetime)\n>\n> and it would be easier, and (hopefully) faster?\n\nCertainly easier and more flexible. It's hard to make guesses about\nperformance, but with your example above where you have two SQL\nfunction calls plus some expression evaluation, I think a single\nfunction would be faster in many cases.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 25 Mar 2020 18:32:05 +0800", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: truncating timestamps on arbitrary intervals" } ]
[ { "msg_contents": "Hello,\n\nI build the pdf (for HEAD) almost daily without problems, but at the \nmoment I get the error below.\n\nI am not sure whether to blame my particular setup (debian stretch), or \nwhether there might be an error in the .sgml. The html files still \nbuild OK.\n\nIf anyone has a suggestion on how to tackle this I'd be grateful.\n\nthanks,\n\nErik Rijkers\n\n\n\n[...]\n[INFO] FOUserAgent - Rendered page #526.\n[INFO] FOUserAgent - Rendered page #527.\n[INFO] FOUserAgent - Rendered page #528.\n[INFO] FOUserAgent - Rendered page #529.\n[[ERROR] FOP - Exception <org.apache.fop.apps.FOPException: \norg.apache.fop.fo.ValidationException: The column-number or number of \ncells in the row overflows the number of fo:table-columns specified for \nthe table. (See position 47337:52207)\njavax.xml.transform.TransformerException: \norg.apache.fop.fo.ValidationException: The column-number or number of \ncells in the row overflows the number of fo:table-columns specified for \nthe table. (See position 47337:52207)>org.apache.fop.apps.FOPException: \norg.apache.fop.fo.ValidationException: The column-number or number of \ncells in the row overflows the number of fo:table-columns specified for \nthe table. (See position 47337:52207)\njavax.xml.transform.TransformerException: \norg.apache.fop.fo.ValidationException: The column-number or number of \ncells in the row overflows the number of fo:table-columns specified for \nthe table. (See position 47337:52207)\n at \norg.apache.fop.cli.InputHandler.transformTo(InputHandler.java:289)\n at \norg.apache.fop.cli.InputHandler.renderTo(InputHandler.java:115)\n at org.apache.fop.cli.Main.startFOP(Main.java:186)\n at org.apache.fop.cli.Main.main(Main.java:217)\nCaused by: javax.xml.transform.TransformerException: \norg.apache.fop.fo.ValidationException: The column-number or number of \ncells in the row overflows the number of fo:table-columns specified for \nthe table. 
(See position 47337:52207)\n at \norg.apache.xalan.transformer.TransformerIdentityImpl.transform(TransformerIdentityImpl.java:502)\n at \norg.apache.fop.cli.InputHandler.transformTo(InputHandler.java:286)\n ... 3 more\nCaused by: org.apache.fop.fo.ValidationException: The column-number or \nnumber of cells in the row overflows the number of fo:table-columns \nspecified for the table. (See position 47337:52207)\n at \norg.apache.fop.events.ValidationExceptionFactory.createException(ValidationExceptionFactory.java:38)\n at \norg.apache.fop.events.EventExceptionManager.throwException(EventExceptionManager.java:58)\n at \norg.apache.fop.events.DefaultEventBroadcaster$1.invoke(DefaultEventBroadcaster.java:175)\n at com.sun.proxy.$Proxy4.tooManyCells(Unknown Source)\n at \norg.apache.fop.fo.flow.table.TableCellContainer.addTableCellChild(TableCellContainer.java:75)\n at \norg.apache.fop.fo.flow.table.TableRow.addChildNode(TableRow.java:95)\n at \norg.apache.fop.fo.FOTreeBuilder$MainFOHandler.startElement(FOTreeBuilder.java:324)\n at \norg.apache.fop.fo.FOTreeBuilder.startElement(FOTreeBuilder.java:179)\n at \norg.apache.xalan.transformer.TransformerIdentityImpl.startElement(TransformerIdentityImpl.java:1073)\n at \norg.apache.xerces.parsers.AbstractSAXParser.startElement(Unknown Source)\n at \norg.apache.xerces.xinclude.XIncludeHandler.startElement(Unknown Source)\n at \norg.apache.xerces.impl.XMLNSDocumentScannerImpl.scanStartElement(Unknown \nSource)\n at \norg.apache.xerces.impl.XMLDocumentFragmentScannerImpl$FragmentContentDispatcher.dispatch(Unknown \nSource)\n at \norg.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown \nSource)\n at org.apache.xerces.parsers.XML11Configuration.parse(Unknown \nSource)\n at org.apache.xerces.parsers.XML11Configuration.parse(Unknown \nSource)\n at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)\n at org.apache.xerces.parsers.AbstractSAXParser.parse(Unknown \nSource)\n at 
\norg.apache.xerces.jaxp.SAXParserImpl$JAXPSAXParser.parse(Unknown Source)\n at \norg.apache.xalan.transformer.TransformerIdentityImpl.transform(TransformerIdentityImpl.java:485)\n ... 4 more\n\n\n\n", "msg_date": "Tue, 24 Mar 2020 14:48:01 +0100", "msg_from": "Erikjan Rijkers <er@xs4all.nl>", "msg_from_op": true, "msg_subject": "documentation pdf build fail (HEAD)" }, { "msg_contents": "Ubuntu 18.04: no crash, but possibly a side effect:\n\n[INFO] FOUserAgent - Rendered page #2685.\n[INFO] FOUserAgent - Rendered page #2686.\n[INFO] FOUserAgent - Rendered page #2687.\n[WARN] FOUserAgent - Destination: Unresolved ID reference \n\"function-encode\" found.\n[WARN] FOUserAgent - Destination: Unresolved ID reference \n\"function-decode\" found.\n[WARN] FOUserAgent - Destination: Unresolved ID reference \n\"sql-altercollation-notes-title\" found.\n[WARN] FOUserAgent - Destination: Unresolved ID reference \n\"sql-altertable-notes-title\" found.\n[WARN] FOUserAgent - Destination: Unresolved ID reference \n\"sql-createaggregate-notes-title\" found.\n[WARN] FOUserAgent - Destination: Unresolved ID reference \n\"sql-createindex-storage-parameters-title\" found.\n[WARN] FOUserAgent - Destination: Unresolved ID reference \n\"sql-createindex-concurrently-title\" found.\n[WARN] FOUserAgent - Destination: Unresolved ID reference \n\"sql-createtable-storage-parameters-title\" found.\n[WARN] FOUserAgent - Destination: Unresolved ID reference \n\"sql-createtable-compatibility-title\" found.\n[WARN] FOUserAgent - Destination: Unresolved ID reference \n\"sql-declare-notes-title\" found.\n[WARN] FOUserAgent - Destination: Unresolved ID reference \n\"sql-inserting-params-title\" found.\n[WARN] FOUserAgent - Destination: Unresolved ID reference \n\"sql-on-conflict-title\" found.\n[WARN] FOUserAgent - Destination: Unresolved ID reference \n\"sql-prepare-examples-title\" found.\n[WARN] FOUserAgent - Destination: Unresolved ID reference \n\"sql-reindex-concurrently-title\" found.\n[WARN] 
FOUserAgent - Destination: Unresolved ID reference \n\"sql-with-title\" found.\n[WARN] FOUserAgent - Destination: Unresolved ID reference \n\"sql-from-title\" found.\n[WARN] FOUserAgent - Destination: Unresolved ID reference \n\"sql-where-title\" found.\n[WARN] FOUserAgent - Destination: Unresolved ID reference \n\"sql-groupby-title\" found.\n[WARN] FOUserAgent - Destination: Unresolved ID reference \n\"sql-having-title\" found.\n[WARN] FOUserAgent - Destination: Unresolved ID reference \n\"sql-window-title\" found.\n[WARN] FOUserAgent - Destination: Unresolved ID reference \n\"sql-select-list-title\" found.\n[WARN] FOUserAgent - Destination: Unresolved ID reference \n\"sql-distinct-title\" found.\n[WARN] FOUserAgent - Destination: Unresolved ID reference \n\"sql-union-title\" found.\n[WARN] FOUserAgent - Destination: Unresolved ID reference \n\"sql-intersect-title\" found.\n[WARN] FOUserAgent - Destination: Unresolved ID reference \n\"sql-except-title\" found.\n[WARN] FOUserAgent - Destination: Unresolved ID reference \n\"sql-orderby-title\" found.\n[WARN] FOUserAgent - Destination: Unresolved ID reference \n\"sql-limit-title\" found.\n[WARN] FOUserAgent - Destination: Unresolved ID reference \n\"sql-for-update-share-title\" found.\n[WARN] FOUserAgent - Destination: Unresolved ID reference \n\"pg-dump-examples-title\" found.\n[WARN] FOUserAgent - Destination: Unresolved ID reference \n\"app-psql-patterns-title\" found.\n[WARN] FOUserAgent - Destination: Unresolved ID reference \n\"app-psql-variables-title\" found.\n[WARN] FOUserAgent - Destination: Unresolved ID reference \n\"app-psql-interpolation-title\" found.\n[WARN] FOUserAgent - Destination: Unresolved ID reference \n\"app-psql-prompting-title\" found.\n[WARN] FOUserAgent - Destination: Unresolved ID reference \n\"app-psql-environment-title\" found.\n[WARN] FOUserAgent - Destination: Unresolved ID reference \n\"app-psql-examples-title\" found.\n[WARN] FOUserAgent - Destination: Unresolved ID reference 
\n\"app-postgres-single-user-title\" found.\n[INFO] FOUserAgent - Rendered page #2688.\n[WARN] FOUserAgent - Page 226: Unresolved ID reference \"function-decode\" \nfound.\n[WARN] FOUserAgent - Page 226: Unresolved ID reference \"function-encode\" \nfound.\n\nKind regards, J. Purtz\n\n\n\n", "msg_date": "Tue, 24 Mar 2020 15:07:01 +0100", "msg_from": "=?UTF-8?Q?J=c3=bcrgen_Purtz?= <juergen@purtz.de>", "msg_from_op": false, "msg_subject": "Re: documentation pdf build fail (HEAD)" }, { "msg_contents": "Erikjan Rijkers <er@xs4all.nl> writes:\n> I build the pdf (for HEAD) almost daily without problems, but at the \n> moment I get the error below.\n> I am not sure whether to blame my particular setup (debian stretch), or \n> whether there might be an error in the .sgml. The html files still \n> build OK.\n\nYeah, I see it too. The problem seems to be that cedffbdb8\nintroduced some broken table markup. I wonder why xmllint\nfailed to catch it? While catching morerows mistakes might be\nhard in general, it shouldn't have been difficult to notice that\nthis table row contained more columns than the table spec allowed.\n\n> If anyone has a suggestion on how to tackle this I'd be grateful.\n\nThe \"position\" noted in the error report seems to be a line number\nand column number in the .fo file. Once you go there and look around\nat surrounding text, you can locate the matching .sgml input and then\ntry to eyeball what's wrong with it.\n\nFix pushed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 24 Mar 2020 10:31:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: documentation pdf build fail (HEAD)" }, { "msg_contents": "On 2020-03-24 15:31, Tom Lane wrote:\n> The problem seems to be that cedffbdb8\n> introduced some broken table markup. I wonder why xmllint\n> failed to catch it?\n\nIt's not a validity issue in the DocBook markup. 
The error comes from \nFOP, which complains because it requires the column count, but other \nprocessors don't necessarily require it.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 24 Mar 2020 15:46:59 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: documentation pdf build fail (HEAD)" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2020-03-24 15:31, Tom Lane wrote:\n>> The problem seems to be that cedffbdb8\n>> introduced some broken table markup. I wonder why xmllint\n>> failed to catch it?\n\n> It's not a validity issue in the DocBook markup. The error comes from \n> FOP, which complains because it requires the column count, but other \n> processors don't necessarily require it.\n\nMaybe not, but if the count is there, shouldn't it be checked?\n\nIn this particular case, the table was obviously broken if you looked\nat the rendered HTML, but I'd kind of expect the toolchain to provide\nbasic sanity checks without having to do that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 24 Mar 2020 11:09:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: documentation pdf build fail (HEAD)" } ]
[ { "msg_contents": "Good afternoon!\n\nI would be grateful for some direction on how to use Background workers to\nhave a worker automatically restart *only* in certain cases, i.e. on\npostmaster start (_PG_init) or a soft crash. I run into all sorts of\ntrouble if I set bgw_restart_time to actually restart on sigterm, because\nin most cases I don't want it to restart (i.e. it was launched with invalid\nconfig, the SQL becomes invalid...). But I *do* want it to auto-restart in\nany kind of crash. If I set bgw_restart_time to never restart, then it\ndoesn't restart after a soft crash, which I want.\n\nThis is for my extension pglogical_ticker, and specifically within this\nmain function where a sigterm might happen:\nhttps://github.com/enova/pglogical_ticker/blob/ef9b68fd6b5b99787034520009577f8cfec0049c/pglogical_ticker.c#L85-L201\n\nI have tried several things unsuccessfully (checking result of SPI_execute\nor SPI_connect) , usually resulting in a constantly restarting and failing\nworker. So, is there a straightforward way to only have the worker\nauto-restart in a very narrow range of cases?\n\nMany thanks!\nJeremy", "msg_date": "Tue, 24 Mar 2020 13:32:49 -0500", "msg_from": "Jeremy Finzel <finzelj@gmail.com>", "msg_from_op": true, "msg_subject": "How to only auto-restart BGW only on crash or _PG_init" }, { "msg_contents": "On Tue, Mar 24, 2020 at 2:33 PM Jeremy Finzel <finzelj@gmail.com> wrote:\n> I would be grateful for some direction on how to use Background workers to have a worker automatically restart *only* in certain cases, i.e. on postmaster start (_PG_init) or a soft crash.  I run into all sorts of trouble if I set bgw_restart_time to actually restart on sigterm, because in most cases I don't want it to restart (i.e. it was launched with invalid config, the SQL becomes invalid...).  But I *do* want it to auto-restart in any kind of crash.  If I set bgw_restart_time to never restart, then it doesn't restart after a soft crash, which I want.\n>\n> This is for my extension pglogical_ticker, and specifically within this main function where a sigterm might happen:\n> https://github.com/enova/pglogical_ticker/blob/ef9b68fd6b5b99787034520009577f8cfec0049c/pglogical_ticker.c#L85-L201\n>\n> I have tried several things unsuccessfully (checking result of SPI_execute or SPI_connect) , usually resulting in a constantly restarting and failing worker. 
So, is there a straightforward way to only have the worker auto-restart in a very narrow range of cases?\n\nI think what you can do is configure the worker to always restart, but\nthen have it exit(0) in the cases where you don't want it to restart,\nand exit(1) in the cases where you do want it to restart. See:\n\nhttps://git.postgresql.org/pg/commitdiff/be7558162acc5578d0b2cf0c8d4c76b6076ce352\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 24 Mar 2020 15:12:32 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: How to only auto-restart BGW only on crash or _PG_init" } ]
[ { "msg_contents": "Dear Sir/Madam,\n\nI am very much interested in working on a project of PostgreSQL for Google\nsummer internship. While I was writing a proposal, I came across some\nguidelines by the company to get in touch about the nature of the project\nand then draft the proposal. I would be very much interested in learning\nmore about the project so I can come up with a reasonable proposal.\n\nBest regards,\nMaryam Farrukh", "msg_date": "Tue, 24 Mar 2020 22:25:46 +0100", "msg_from": "Maryam Farrukh <maryam.farrukh1995@gmail.com>", "msg_from_op": true, "msg_subject": "PostgreSQL proposal of Google Summer of Code" }, { "msg_contents": "On Tue, Mar 24, 2020 at 7:07 PM Maryam Farrukh <maryam.farrukh1995@gmail.com>\nwrote:\n\n> Dear Sir/Madam,\n>\n> I am very much interested in working on a project of PostgreSQL for Google\n> summer internship. While I was writing a proposal, I came across some\n> guidelines by the company to get in touch about the nature of the project\n> and then draft the proposal. 
I would be very much interested in learning\n> more about the project so I can come up with a reasonable proposal.\n>\n>\nHi Maryam,\n\nYou can start having a look on the following links:\nhttps://wiki.postgresql.org/wiki/GSoC\nhttps://wiki.postgresql.org/wiki/GSoC_2020\n\nAs an old PostgreSQL GSoC student I can tell you it's an amazing experience.\n\nRegards,\n\n-- \n Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento", "msg_date": "Tue, 24 Mar 2020 19:22:53 -0300", "msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL proposal of Google Summer of Code" }, { "msg_contents": "On Sun, Mar 29, 2020 at 4:48 PM Maryam Farrukh <maryam.farrukh1995@gmail.com>\nwrote:\n>\n> Hi Fabrizio,\n>\n\nHi Maryam, please try to avoid top posting!!\n\nReturning the discussion to pgsql-hackers.\n\n\n> Thank you for reaching out. I have a question. I went through the links\nyou provided me. There\n> are already some project ideas over ther. 
My question is that do we have\nto select a project from\n> there or come up with an idea of our own.\n>\n\nIf you have a good idea and that idea fit to GSoC felt free to propose\nit... the listed ideas are just ideas.\n\nRegards,\n\n--\n Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento", "msg_date": "Mon, 30 Mar 2020 09:18:06 -0300", "msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL proposal of Google Summer of Code" } ]
[ { "msg_contents": "Hi\n\n\n\n From the PG logical replication documentation, I see that there is a listed limitation that sequence relation is not replicated logically. After some examination, I see that retrieving the next value from a sequence using the nextval() call will emits a WAL update every 32 calls to nextval(). In fact, when it emits a WAL update, it will write a future value 32 increments from now, and maintain a internal cache for delivering sequence numbers. It is done this way to minimize the write operation to WAL record at a risk of losing some values during a crash. So if we were to replicate the sequence, the subscriber will receive a future value (32 calls to nextval()) from now, and it obviously does not reflect current status. Sequence changes caused by other sequence-related SQL functions like setval() or ALTER SEQUENCE xxx, will always emit a WAL update, so replicating changes caused by these should not be a problem. \n\n\n\nI have shared a patch that allows sequence relation to be supported in logical replication via the decoding plugin ( test_decoding for example ); it does not support sequence relation in logical replication between a PG publisher and a PG subscriber via pgoutput plugin as it will require much more work. For the replication to make sense, the patch actually disables the WAL update at every 32 nextval() calls, so every call to nextval() will emit a WAL update for proper replication. This is done by setting SEQ_LOG_VALS to 0 in sequence.c\n\n\n\nI think the question is that should we minimize WAL update frequency (every 32 calls) for getting next value in a sequence at a cost of losing values during crash or being able to replicate a sequence relation properly at a cost or more WAL updates?\n\n\n\n\n\nCary Huang\n\n-------------\n\nHighGo Software Inc. 
(Canada)\n\nmailto:cary.huang@highgo.ca\n\nhttp://www.highgo.ca", "msg_date": "Tue, 24 Mar 2020 16:19:21 -0700", "msg_from": "Cary Huang <cary.huang@highgo.ca>", "msg_from_op": true, "msg_subject": "Include sequence relation support in logical replication" }, { "msg_contents": "On 2020-03-24 16:19:21 -0700, Cary Huang wrote:\n> Hi\n> \n> \n> \n> From the PG logical replication documentation, I see that there is a\n> listed limitation that sequence relation is not replicated\n> logically. After some examination, I see that retrieving the next\n> value from a sequence using the nextval() call will emits a WAL update\n> every 32 calls to nextval(). In fact, when it emits a WAL update, it\n> will write a future value 32 increments from now, and maintain a\n> internal cache for delivering sequence numbers. It is done this way to\n> minimize the write operation to WAL record at a risk of losing some\n> values during a crash. So if we were to replicate the sequence, the\n> subscriber will receive a future value (32 calls to nextval()) from\n> now, and it obviously does not reflect current status. Sequence\n> changes caused by other sequence-related SQL functions like setval()\n> or ALTER SEQUENCE xxx, will always emit a WAL update, so replicating\n> changes caused by these should not be a problem. \n> \n> \n> \n> I have shared a patch that allows sequence relation to be supported in logical replication via the decoding plugin ( test_decoding for example ); it does not support sequence relation in logical replication between a PG publisher and a PG subscriber via pgoutput plugin as it will require much more work. For the replication to make sense, the patch actually disables the WAL update at every 32 nextval() calls, so every call to nextval() will emit a WAL update for proper replication. 
This is done by setting SEQ_LOG_VALS to 0 in sequence.c\n> \n> \n> \n> I think the question is that should we minimize WAL update frequency (every 32 calls) for getting next value in a sequence at a cost of losing values during crash or being able to replicate a sequence relation properly at a cost or more WAL updates?\n> \n> \n> \n> \n> \n> Cary Huang\n> \n> -------------\n> \n> HighGo Software Inc. (Canada)\n> \n> mailto:cary.huang@highgo.ca\n> \n> http://www.highgo.ca\n\n> diff --git a/contrib/test_decoding/test_decoding.c b/contrib/test_decoding/test_decoding.c\n> index 93c948856e..7a7e572d6c 100644\n> --- a/contrib/test_decoding/test_decoding.c\n> +++ b/contrib/test_decoding/test_decoding.c\n> @@ -466,6 +466,15 @@ pg_decode_change(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,\n> \t\t\t\t\t\t\t\t\t&change->data.tp.oldtuple->tuple,\n> \t\t\t\t\t\t\t\t\ttrue);\n> \t\t\tbreak;\n> +\t\tcase REORDER_BUFFER_CHANGE_SEQUENCE:\n> +\t\t\t\t\tappendStringInfoString(ctx->out, \" SEQUENCE:\");\n> +\t\t\t\t\tif (change->data.sequence.newtuple == NULL)\n> +\t\t\t\t\t\tappendStringInfoString(ctx->out, \" (no-tuple-data)\");\n> +\t\t\t\t\telse\n> +\t\t\t\t\t\ttuple_to_stringinfo(ctx->out, tupdesc,\n> +\t\t\t\t\t\t\t\t\t\t\t&change->data.sequence.newtuple->tuple,\n> +\t\t\t\t\t\t\t\t\t\t\tfalse);\n> +\t\t\t\t\tbreak;\n> \t\tdefault:\n> \t\t\tAssert(false);\n> \t}\n> diff --git a/src/backend/commands/sequence.c b/src/backend/commands/sequence.c\n> index 6aab73bfd4..941015e4aa 100644\n> --- a/src/backend/commands/sequence.c\n> +++ b/src/backend/commands/sequence.c\n> @@ -49,11 +49,10 @@\n> \n> \n> /*\n> - * We don't want to log each fetching of a value from a sequence,\n> - * so we pre-log a few fetches in advance. 
In the event of\n> - * crash we can lose (skip over) as many values as we pre-logged.\n> + * Sequence replication is now supported and we will now need to log each sequence\n> + * update to WAL such that the standby can properly receive the sequence change\n> */\n> -#define SEQ_LOG_VALS\t32\n> +#define SEQ_LOG_VALS\t0\n> \n> /*\n> * The \"special area\" of a sequence's buffer page looks like this.\n> diff --git a/src/backend/replication/logical/decode.c b/src/backend/replication/logical/decode.c\n> index c2e5e3abf8..3dc14ead08 100644\n> --- a/src/backend/replication/logical/decode.c\n> +++ b/src/backend/replication/logical/decode.c\n> @@ -42,6 +42,7 @@\n> #include \"replication/reorderbuffer.h\"\n> #include \"replication/snapbuild.h\"\n> #include \"storage/standby.h\"\n> +#include \"commands/sequence.h\"\n> \n> typedef struct XLogRecordBuffer\n> {\n> @@ -70,9 +71,11 @@ static void DecodeCommit(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,\n> \t\t\t\t\t\t xl_xact_parsed_commit *parsed, TransactionId xid);\n> static void DecodeAbort(LogicalDecodingContext *ctx, XLogRecordBuffer *buf,\n> \t\t\t\t\t\txl_xact_parsed_abort *parsed, TransactionId xid);\n> +static void DecodeSequence(LogicalDecodingContext *ctx, XLogRecordBuffer *buf);\n> \n> /* common function to decode tuples */\n> static void DecodeXLogTuple(char *data, Size len, ReorderBufferTupleBuf *tup);\n> +static void DecodeSeqTuple(char *data, Size len, ReorderBufferTupleBuf *tuple);\n> \n> /*\n> * Take every XLogReadRecord()ed record and perform the actions required to\n> @@ -130,6 +133,10 @@ LogicalDecodingProcessRecord(LogicalDecodingContext *ctx, XLogReaderState *recor\n> \t\t\tDecodeLogicalMsgOp(ctx, &buf);\n> \t\t\tbreak;\n> \n> +\t\tcase RM_SEQ_ID:\n> +\t\t\tDecodeSequence(ctx, &buf);\n> +\t\t\tbreak;\n> +\n> \t\t\t/*\n> \t\t\t * Rmgrs irrelevant for logical decoding; they describe stuff not\n> \t\t\t * represented in logical decoding. 
Add new rmgrs in rmgrlist.h's\n> @@ -145,7 +152,6 @@ LogicalDecodingProcessRecord(LogicalDecodingContext *ctx, XLogReaderState *recor\n> \t\tcase RM_HASH_ID:\n> \t\tcase RM_GIN_ID:\n> \t\tcase RM_GIST_ID:\n> -\t\tcase RM_SEQ_ID:\n> \t\tcase RM_SPGIST_ID:\n> \t\tcase RM_BRIN_ID:\n> \t\tcase RM_COMMIT_TS_ID:\n> @@ -1052,3 +1058,80 @@ DecodeXLogTuple(char *data, Size len, ReorderBufferTupleBuf *tuple)\n> \theader->t_infomask2 = xlhdr.t_infomask2;\n> \theader->t_hoff = xlhdr.t_hoff;\n> }\n> +\n> +/*\n> + * Decode Sequence Tuple\n> + */\n> +static void\n> +DecodeSeqTuple(char *data, Size len, ReorderBufferTupleBuf *tuple)\n> +{\n> +\tint\t\t\tdatalen = len - sizeof(xl_seq_rec) - SizeofHeapTupleHeader;\n> +\n> +\tAssert(datalen >= 0);\n> +\n> +\ttuple->tuple.t_len = datalen + SizeofHeapTupleHeader;;\n> +\n> +\tItemPointerSetInvalid(&tuple->tuple.t_self);\n> +\n> +\ttuple->tuple.t_tableOid = InvalidOid;\n> +\n> +\tmemcpy(((char *) tuple->tuple.t_data),\n> +\t\t data + sizeof(xl_seq_rec),\n> +\t\t SizeofHeapTupleHeader);\n> +\n> +\tmemcpy(((char *) tuple->tuple.t_data) + SizeofHeapTupleHeader,\n> +\t\t data + sizeof(xl_seq_rec) + SizeofHeapTupleHeader,\n> +\t\t datalen);\n> +}\n> +\n> +/*\n> + * Handle sequence decode\n> + */\n> +static void\n> +DecodeSequence(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)\n> +{\n> +\tReorderBufferChange *change;\n> +\tRelFileNode target_node;\n> +\tXLogReaderState *r = buf->record;\n> +\tchar\t *tupledata = NULL;\n> +\tSize\t\ttuplelen;\n> +\tSize\t\tdatalen = 0;\n> +\tuint8\t\tinfo = XLogRecGetInfo(buf->record) & ~XLR_INFO_MASK;\n> +\n> +\t/* only decode changes flagged with XLOG_SEQ_LOG */\n> +\tif (info != XLOG_SEQ_LOG)\n> +\t\treturn;\n> +\n> +\t/* only interested in our database */\n> +\tXLogRecGetBlockTag(r, 0, &target_node, NULL, NULL);\n> +\tif (target_node.dbNode != ctx->slot->data.database)\n> +\t\treturn;\n> +\n> +\t/* output plugin doesn't look for this origin, no need to queue */\n> +\tif (FilterByOrigin(ctx, 
XLogRecGetOrigin(r)))\n> +\t\treturn;\n> +\n> +\tchange = ReorderBufferGetChange(ctx->reorder);\n> +\tchange->action = REORDER_BUFFER_CHANGE_SEQUENCE;\n> +\tchange->origin_id = XLogRecGetOrigin(r);\n> +\n> +\tmemcpy(&change->data.sequence.relnode, &target_node, sizeof(RelFileNode));\n> +\n> +\ttupledata = XLogRecGetData(r);\n> +\tdatalen = XLogRecGetDataLen(r);\n> +\n> +\tif(!datalen || !tupledata)\n> +\t\treturn;\n> +\n> +\ttuplelen = datalen - SizeOfHeapHeader - sizeof(xl_seq_rec);\n> +\n> +\tchange->data.sequence.newtuple =\n> +\t\tReorderBufferGetTupleBuf(ctx->reorder, tuplelen);\n> +\n> +\tDecodeSeqTuple(tupledata, datalen, change->data.sequence.newtuple);\n> +\n> +\tReorderBufferXidSetCatalogChanges(ctx->reorder, XLogRecGetXid(buf->record), buf->origptr);\n> +\n> +\tReorderBufferQueueChange(ctx->reorder, XLogRecGetXid(r), buf->origptr, change);\n> +\n> +}\n> diff --git a/src/backend/replication/logical/reorderbuffer.c b/src/backend/replication/logical/reorderbuffer.c\n> index 481277a1fd..24f2cdf51d 100644\n> --- a/src/backend/replication/logical/reorderbuffer.c\n> +++ b/src/backend/replication/logical/reorderbuffer.c\n> @@ -474,6 +474,13 @@ ReorderBufferReturnChange(ReorderBuffer *rb, ReorderBufferChange *change)\n> \t\tcase REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:\n> \t\tcase REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID:\n> \t\t\tbreak;\n> +\t\tcase REORDER_BUFFER_CHANGE_SEQUENCE:\n> +\t\t\tif (change->data.sequence.newtuple)\n> +\t\t\t{\n> +\t\t\t\tReorderBufferReturnTupleBuf(rb, change->data.sequence.newtuple);\n> +\t\t\t\tchange->data.sequence.newtuple = NULL;\n> +\t\t\t}\n> +\t\t\tbreak;\n> \t}\n> \n> \tpfree(change);\n> @@ -1833,6 +1840,38 @@ ReorderBufferCommit(ReorderBuffer *rb, TransactionId xid,\n> \t\t\t\tcase REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID:\n> \t\t\t\t\telog(ERROR, \"tuplecid value in changequeue\");\n> \t\t\t\t\tbreak;\n> +\t\t\t\tcase REORDER_BUFFER_CHANGE_SEQUENCE:\n> +\t\t\t\t\tAssert(snapshot_now);\n> +\n> +\t\t\t\t\treloid = 
RelidByRelfilenode(change->data.sequence.relnode.spcNode,\n> +\t\t\t\t\t\t\t\t\t\t\t\tchange->data.sequence.relnode.relNode);\n> +\n> +\t\t\t\t\tif (reloid == InvalidOid &&\n> +\t\t\t\t\t\tchange->data.sequence.newtuple == NULL)\n> +\t\t\t\t\t\tgoto change_done;\n> +\t\t\t\t\telse if (reloid == InvalidOid)\n> +\t\t\t\t\t\telog(ERROR, \"could not map filenode \\\"%s\\\" to relation OID\",\n> +\t\t\t\t\t\t\t relpathperm(change->data.tp.relnode,\n> +\t\t\t\t\t\t\t\t\t\t MAIN_FORKNUM));\n> +\n> +\t\t\t\t\trelation = RelationIdGetRelation(reloid);\n> +\n> +\t\t\t\t\tif (!RelationIsValid(relation))\n> +\t\t\t\t\t\telog(ERROR, \"could not open relation with OID %u (for filenode \\\"%s\\\")\",\n> +\t\t\t\t\t\t\t reloid,\n> +\t\t\t\t\t\t\t relpathperm(change->data.sequence.relnode,\n> +\t\t\t\t\t\t\t\t\t\t MAIN_FORKNUM));\n> +\n> +\t\t\t\t\tif (!RelationIsLogicallyLogged(relation))\n> +\t\t\t\t\t\tgoto change_done;\n> +\n> +\t\t\t\t\t/* user-triggered change */\n> +\t\t\t\t\tif (!IsToastRelation(relation))\n> +\t\t\t\t\t{\n> +\t\t\t\t\t\tReorderBufferToastReplace(rb, txn, relation, change);\n> +\t\t\t\t\t\trb->apply_change(rb, txn, relation, change);\n> +\t\t\t\t\t}\n> +\t\t\t\t\tbreak;\n> \t\t\t}\n> \t\t}\n> \n> @@ -2516,15 +2555,23 @@ ReorderBufferSerializeChange(ReorderBuffer *rb, ReorderBufferTXN *txn,\n> \t\tcase REORDER_BUFFER_CHANGE_UPDATE:\n> \t\tcase REORDER_BUFFER_CHANGE_DELETE:\n> \t\tcase REORDER_BUFFER_CHANGE_INTERNAL_SPEC_INSERT:\n> +\t\tcase REORDER_BUFFER_CHANGE_SEQUENCE:\n> \t\t\t{\n> \t\t\t\tchar\t *data;\n> \t\t\t\tReorderBufferTupleBuf *oldtup,\n> \t\t\t\t\t\t *newtup;\n> \t\t\t\tSize\t\toldlen = 0;\n> \t\t\t\tSize\t\tnewlen = 0;\n> -\n> -\t\t\t\toldtup = change->data.tp.oldtuple;\n> -\t\t\t\tnewtup = change->data.tp.newtuple;\n> +\t\t\t\tif (change->action == REORDER_BUFFER_CHANGE_SEQUENCE)\n> +\t\t\t\t{\n> +\t\t\t\t\toldtup = NULL;\n> +\t\t\t\t\tnewtup = change->data.sequence.newtuple;\n> +\t\t\t\t}\n> +\t\t\t\telse\n> +\t\t\t\t{\n> +\t\t\t\t\toldtup = 
change->data.tp.oldtuple;\n> +\t\t\t\t\tnewtup = change->data.tp.newtuple;\n> +\t\t\t\t}\n> \n> \t\t\t\tif (oldtup)\n> \t\t\t\t{\n> @@ -2707,14 +2754,23 @@ ReorderBufferChangeSize(ReorderBufferChange *change)\n> \t\tcase REORDER_BUFFER_CHANGE_UPDATE:\n> \t\tcase REORDER_BUFFER_CHANGE_DELETE:\n> \t\tcase REORDER_BUFFER_CHANGE_INTERNAL_SPEC_INSERT:\n> +\t\tcase REORDER_BUFFER_CHANGE_SEQUENCE:\n> \t\t\t{\n> \t\t\t\tReorderBufferTupleBuf *oldtup,\n> \t\t\t\t\t\t *newtup;\n> \t\t\t\tSize\t\toldlen = 0;\n> \t\t\t\tSize\t\tnewlen = 0;\n> \n> -\t\t\t\toldtup = change->data.tp.oldtuple;\n> -\t\t\t\tnewtup = change->data.tp.newtuple;\n> +\t\t\t\tif (change->action == REORDER_BUFFER_CHANGE_SEQUENCE)\n> +\t\t\t\t{\n> +\t\t\t\t\toldtup = NULL;\n> +\t\t\t\t\tnewtup = change->data.sequence.newtuple;\n> +\t\t\t\t}\n> +\t\t\t\telse\n> +\t\t\t\t{\n> +\t\t\t\t\toldtup = change->data.tp.oldtuple;\n> +\t\t\t\t\tnewtup = change->data.tp.newtuple;\n> +\t\t\t\t}\n> \n> \t\t\t\tif (oldtup)\n> \t\t\t\t{\n> @@ -3048,6 +3104,32 @@ ReorderBufferRestoreChange(ReorderBuffer *rb, ReorderBufferTXN *txn,\n> \t\tcase REORDER_BUFFER_CHANGE_INTERNAL_COMMAND_ID:\n> \t\tcase REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID:\n> \t\t\tbreak;\n> +\t\tcase REORDER_BUFFER_CHANGE_SEQUENCE:\n> +\t\t\tif (change->data.sequence.newtuple)\n> +\t\t\t{\n> +\t\t\t\t/* here, data might not be suitably aligned! 
*/\n> +\t\t\t\tuint32\t\ttuplelen;\n> +\n> +\t\t\t\tmemcpy(&tuplelen, data + offsetof(HeapTupleData, t_len),\n> +\t\t\t\t\t sizeof(uint32));\n> +\n> +\t\t\t\tchange->data.sequence.newtuple =\n> +\t\t\t\t\tReorderBufferGetTupleBuf(rb, tuplelen - SizeofHeapTupleHeader);\n> +\n> +\t\t\t\t/* restore ->tuple */\n> +\t\t\t\tmemcpy(&change->data.sequence.newtuple->tuple, data,\n> +\t\t\t\t\t sizeof(HeapTupleData));\n> +\t\t\t\tdata += sizeof(HeapTupleData);\n> +\n> +\t\t\t\t/* reset t_data pointer into the new tuplebuf */\n> +\t\t\t\tchange->data.sequence.newtuple->tuple.t_data =\n> +\t\t\t\t\tReorderBufferTupleBufData(change->data.tp.newtuple);\n> +\n> +\t\t\t\t/* restore tuple data itself */\n> +\t\t\t\tmemcpy(change->data.sequence.newtuple->tuple.t_data, data, tuplelen);\n> +\t\t\t\tdata += tuplelen;\n> +\t\t\t}\n> +\t\t\tbreak;\n> \t}\n> \n> \tdlist_push_tail(&txn->changes, &change->node);\n> diff --git a/src/include/replication/reorderbuffer.h b/src/include/replication/reorderbuffer.h\n> index 626ecf4dc9..cf3fd45c5f 100644\n> --- a/src/include/replication/reorderbuffer.h\n> +++ b/src/include/replication/reorderbuffer.h\n> @@ -62,7 +62,8 @@ enum ReorderBufferChangeType\n> \tREORDER_BUFFER_CHANGE_INTERNAL_TUPLECID,\n> \tREORDER_BUFFER_CHANGE_INTERNAL_SPEC_INSERT,\n> \tREORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM,\n> -\tREORDER_BUFFER_CHANGE_TRUNCATE\n> +\tREORDER_BUFFER_CHANGE_TRUNCATE,\n> +\tREORDER_BUFFER_CHANGE_SEQUENCE,\n> };\n> \n> /* forward declaration */\n> @@ -149,6 +150,15 @@ typedef struct ReorderBufferChange\n> \t\t\tCommandId\tcmax;\n> \t\t\tCommandId\tcombocid;\n> \t\t}\t\t\ttuplecid;\n> +\t\t/*\n> +\t\t * Truncate data for REORDER_BUFFER_CHANGE_SEQUENCE representing one\n> +\t\t * set of relations to be truncated.\n> +\t\t */\n> +\t\tstruct\n> +\t\t{\n> +\t\t\tRelFileNode relnode;\n> +\t\t\tReorderBufferTupleBuf *newtuple;\n> +\t\t}\t\t\tsequence;\n> \t}\t\t\tdata;\n> \n> \t/*\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 24 Mar 2020 17:44:39 
-0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Include sequence relation support in logical replication" }, { "msg_contents": "Hi,\n\nOn 2020-03-24 16:19:21 -0700, Cary Huang wrote:\n> I have shared a patch that allows sequence relation to be supported in\n> logical replication via the decoding plugin ( test_decoding for\n> example ); it does not support sequence relation in logical\n> replication between a PG publisher and a PG subscriber via pgoutput\n> plugin as it will require much more work.\n\nCould you expand on \"much more work\"? Once decoding support is there,\nthat shouldn't be that much?\n\n\n> Sequence changes caused by other sequence-related SQL functions like\n> setval() or ALTER SEQUENCE xxx, will always emit a WAL update, so\n> replicating changes caused by these should not be a problem.\n\nI think this really would need to handle at the very least setval to\nmake sense.\n\n\n> For the replication to make sense, the patch actually disables the WAL\n> update at every 32 nextval() calls, so every call to nextval() will\n> emit a WAL update for proper replication. This is done by setting\n> SEQ_LOG_VALS to 0 in sequence.c\n\nWhy is that needed? ISTM updating in increments of 32 is fine for\nreplication purposes? 
It's good imo, because sending out more granular\nincrements would increase the size of the WAL stream?\n\n\n\n> diff --git a/contrib/test_decoding/test_decoding.c b/contrib/test_decoding/test_decoding.c\n> index 93c948856e..7a7e572d6c 100644\n> --- a/contrib/test_decoding/test_decoding.c\n> +++ b/contrib/test_decoding/test_decoding.c\n> @@ -466,6 +466,15 @@ pg_decode_change(LogicalDecodingContext *ctx, ReorderBufferTXN *txn,\n> \t\t\t\t\t\t\t\t\t&change->data.tp.oldtuple->tuple,\n> \t\t\t\t\t\t\t\t\ttrue);\n> \t\t\tbreak;\n> +\t\tcase REORDER_BUFFER_CHANGE_SEQUENCE:\n> +\t\t\t\t\tappendStringInfoString(ctx->out, \" SEQUENCE:\");\n> +\t\t\t\t\tif (change->data.sequence.newtuple == NULL)\n> +\t\t\t\t\t\tappendStringInfoString(ctx->out, \" (no-tuple-data)\");\n> +\t\t\t\t\telse\n> +\t\t\t\t\t\ttuple_to_stringinfo(ctx->out, tupdesc,\n> +\t\t\t\t\t\t\t\t\t\t\t&change->data.sequence.newtuple->tuple,\n> +\t\t\t\t\t\t\t\t\t\t\tfalse);\n> +\t\t\t\t\tbreak;\n> \t\tdefault:\n> \t\t\tAssert(false);\n> \t}\n\nYou should also add tests - the main purpose of contrib/test_decoding is\nto facilitate writing those...\n\n\n> +\tReorderBufferXidSetCatalogChanges(ctx->reorder, XLogRecGetXid(buf->record), buf->origptr);\n\nHuh, why are you doing this? That's going to increase overhead of logical\ndecoding by many times?\n\n\n> +\t\t\t\tcase REORDER_BUFFER_CHANGE_SEQUENCE:\n> +\t\t\t\t\tAssert(snapshot_now);\n> +\n> +\t\t\t\t\treloid = RelidByRelfilenode(change->data.sequence.relnode.spcNode,\n> +\t\t\t\t\t\t\t\t\t\t\t\tchange->data.sequence.relnode.relNode);\n> +\n> +\t\t\t\t\tif (reloid == InvalidOid &&\n> +\t\t\t\t\t\tchange->data.sequence.newtuple == NULL)\n> +\t\t\t\t\t\tgoto change_done;\n\nI don't think this path should be needed? 
There's afaict no valid way\nwe should be able to end up here without a tuple?\n\n\n> +\t\t\t\t\tif (!RelationIsLogicallyLogged(relation))\n> +\t\t\t\t\t\tgoto change_done;\n\nSimilarly, this seems superfluous and should perhaps be an assertion?\n\n> +\t\t\t\t\t/* user-triggered change */\n> +\t\t\t\t\tif (!IsToastRelation(relation))\n> +\t\t\t\t\t{\n> +\t\t\t\t\t\tReorderBufferToastReplace(rb, txn, relation, change);\n> +\t\t\t\t\t\trb->apply_change(rb, txn, relation, change);\n> +\t\t\t\t\t}\n> +\t\t\t\t\tbreak;\n> \t\t\t}\n> \t\t}\n>\n\nThis doesn't make sense either.\n\n\n\n> diff --git a/src/include/replication/reorderbuffer.h b/src/include/replication/reorderbuffer.h\n> index 626ecf4dc9..cf3fd45c5f 100644\n> --- a/src/include/replication/reorderbuffer.h\n> +++ b/src/include/replication/reorderbuffer.h\n> @@ -62,7 +62,8 @@ enum ReorderBufferChangeType\n> \tREORDER_BUFFER_CHANGE_INTERNAL_TUPLECID,\n> \tREORDER_BUFFER_CHANGE_INTERNAL_SPEC_INSERT,\n> \tREORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM,\n> -\tREORDER_BUFFER_CHANGE_TRUNCATE\n> +\tREORDER_BUFFER_CHANGE_TRUNCATE,\n> +\tREORDER_BUFFER_CHANGE_SEQUENCE,\n> };\n>\n> /* forward declaration */\n> @@ -149,6 +150,15 @@ typedef struct ReorderBufferChange\n> \t\t\tCommandId\tcmax;\n> \t\t\tCommandId\tcombocid;\n> \t\t}\t\t\ttuplecid;\n> +\t\t/*\n> +\t\t * Truncate data for REORDER_BUFFER_CHANGE_SEQUENCE representing one\n> +\t\t * set of relations to be truncated.\n> +\t\t */\n\nWhat?\n\n> +\t\tstruct\n> +\t\t{\n> +\t\t\tRelFileNode relnode;\n> +\t\t\tReorderBufferTupleBuf *newtuple;\n> +\t\t}\t\t\tsequence;\n> \t}\t\t\tdata;\n>\n> \t/*\n\nI don't think we should expose sequence changes via their tuples -\nthat'll unnecessarily expose a lot of implementation details.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 25 Mar 2020 12:27:28 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Include sequence relation support in logical replication" }, {
"msg_contents": "On Wed, Mar 25, 2020 at 12:27:28PM -0700, Andres Freund wrote:\n> On 2020-03-24 16:19:21 -0700, Cary Huang wrote:\n>> For the replication to make sense, the patch actually disables the WAL\n>> update at every 32 nextval() calls, so every call to nextval() will\n>> emit a WAL update for proper replication. This is done by setting\n>> SEQ_LOG_VALS to 0 in sequence.c\n> \n> Why is that needed? ISTM updating in increments of 32 is fine for\n> replication purposes? It's good imo, because sending out more granular\n> increments would increase the size of the WAL stream?\n\nOnce upon a time, I was looking at the effects of playing with the\nlimit of a WAL record generated every 32 increments for a sequence,\nand the performance difference is huge and noticeable.\n--\nMichael", "msg_date": "Thu, 26 Mar 2020 15:56:33 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Include sequence relation support in logical replication" }, { "msg_contents": "Hi Andres\n\n\n\nthanks for your reply and your patch review. Please see my comments below\n\n\n\n>On 2020-03-24 16:19:21 -0700, Cary Huang wrote: \n\n>> I have shared a patch that allows sequence relation to be supported in \n\n>> logical replication via the decoding plugin ( test_decoding for \n\n>> example ); it does not support sequence relation in logical \n\n>> replication between a PG publisher and a PG subscriber via pgoutput \n\n>> plugin as it will require much more work. \n\n>\n\n> Could you expand on \"much more work\"? Once decoding support is there, \n\n> that shouldn't be that much? \n\n\n\nBy much more work, I meant more source files will need to be changed to have sequence replication \n\nsupported between a PG subscriber and publisher using pgoutput plugin. 
About 10 more source file changes.\n\nThe idea is similar though.\n\n\n\n\n>> Sequence changes caused by other sequence-related SQL functions like \n\n>> setval() or ALTER SEQUENCE xxx, will always emit a WAL update, so \n\n>> replicating changes caused by these should not be a problem. \n\n>\n\n> I think this really would need to handle at the very least setval to \n\n> make sense. \n\n\n\nyes, sure\n\n\n\n>> For the replication to make sense, the patch actually disables the WAL \n\n>> update at every 32 nextval() calls, so every call to nextval() will \n\n>> emit a WAL update for proper replication. This is done by setting \n\n>> SEQ_LOG_VALS to 0 in sequence.c \n\n>\n\n> Why is that needed? ISTM updating in increments of 32 is fine for \n\n> replication purposes? It's good imo, because sending out more granular \n\n> increments would increase the size of the WAL stream? \n\n\n\nyes, updating the WAL at every 32nd increment is good and has huge performance benefits according \n\nto Michael, but when it is replicated logically to subscribers, the sequence value they receive would not \n\nmake much sense to them.\n\nFor example, \n\n\n\nif I have a sequence called \"seq\" with current value = 100 and increment = 5, the nextval('seq') call will\n\nreturn 105 to the client but will write 260 to the WAL record ( 100 + (5*32) ), because that is the value after 32\n\nincrements. Internally it also maintains a \"log_cnt\" counter that tracks how many nextval() calls have been invoked\n\nsince the last WAL write, so it can derive backwards to find the proper next value to return to the client. \n\n\n\nBut the subscriber for this sequence will receive a change of 260 instead of 105, which does not represent the current\n\nsequence status. 
The subscriber is not able to derive backwards because it does not know the increment size in its schema.\n\n\n\nSetting SEQ_LOG_VALS to 0 in sequence.c basically disables this 32-increment behavior and makes a WAL update at every nextval() call,\n\nand therefore the subscriber to this sequence will receive the same value (105) as the client, at the cost of more WAL writes.\n\n\n\nI would like to ask if you have some suggestions or ideas that can make the subscriber receive the current value without the need to\n\ndisable the 32-increment behavior?\n\n\n\n>> diff --git a/contrib/test_decoding/test_decoding.c b/contrib/test_decoding/test_decoding.c \n\n>> index 93c948856e..7a7e572d6c 100644 \n\n>> --- a/contrib/test_decoding/test_decoding.c \n\n>> +++ b/contrib/test_decoding/test_decoding.c \n\n>> @@ -466,6 +466,15 @@ pg_decode_change(LogicalDecodingContext *ctx, ReorderBufferTXN *txn, \n\n>>                                     &change->data.tp.oldtuple->tuple, \n\n>>                                     true); \n\n>>             break; \n\n>> +        case REORDER_BUFFER_CHANGE_SEQUENCE: \n\n>> +                    appendStringInfoString(ctx->out, \" SEQUENCE:\"); \n\n>> +                    if (change->data.sequence.newtuple == NULL) \n\n>> +                        appendStringInfoString(ctx->out, \" (no-tuple-data)\"); \n\n>> +                    else \n\n>> +                        tuple_to_stringinfo(ctx->out, tupdesc, \n\n>> +                                            &change->data.sequence.newtuple->tuple, \n\n>> +                                            false); \n\n>> +                    break; \n\n>>         default: \n\n>>             Assert(false); \n\n>>     } \n\n>\n\n> You should also add tests - the main purpose of contrib/test_decoding is \n\n> to facilitate writing those... \n\n\n\nthanks, I will add\n\n\n\n\n>> +    ReorderBufferXidSetCatalogChanges(ctx->reorder, XLogRecGetXid(buf->record), buf->origptr); \n\n>\n\n> Huh, why are you doing this? 
That's going to increase overhead of logical \n\n> decoding by many times? \n\n\n\nThis is needed to allow a snapshot to be built inside the DecodeCommit() function. Regular changes caused by INSERT also have this \n\nfunction called, so I assume it is needed to ensure proper decoding. Without this, a snapshot will not be built and the change \n\ntransaction will not be logged.\n\n\n\n\n>> +                case REORDER_BUFFER_CHANGE_SEQUENCE: \n\n>> +                    Assert(snapshot_now); \n\n>> + \n\n>> +                    reloid = RelidByRelfilenode(change->data.sequence.relnode.spcNode, \n\n>> +                                                change->data.sequence.relnode.relNode); \n\n>> + \n\n>> +                    if (reloid == InvalidOid && \n\n>> +                        change->data.sequence.newtuple == NULL) \n\n>> +                        goto change_done; \n\n>\n\n> I don't think this path should be needed? There's afaict no valid ways \n\n> we should be able to end up here without a tuple? \n\n\n\nyeah you are right, I can remove the tuple check\n\n\n\n>> +                    if (!RelationIsLogicallyLogged(relation)) \n\n>> +                        goto change_done; \n\n>\n\n> Similarly, this seems superflous and should perhaps be an assertion? \n\n\n\nI think it should be ok to check a relation like this, because it will also check the persistence of the relation and whether\n\nwal_level is set to 'logical'. 
It is commonly used in the regular INSERT cases, so I thought it would make sense to use it\n\nfor sequences.\n\n\n\n>> +                    /* user-triggered change */ \n\n>> +                    if (!IsToastRelation(relation)) \n\n>> +                    { \n\n>> +                        ReorderBufferToastReplace(rb, txn, relation, change); \n\n>> +                        rb->apply_change(rb, txn, relation, change); \n\n>> +                    } \n\n>> +                    break; \n\n>>             } \n\n>>         } \n\n>> \n\n>\n\n> This doesn't make sense either. \n\n\n\nagreed, it should not be here.\n\n\n\n>> /* forward declaration */ \n\n>> @@ -149,6 +150,15 @@ typedef struct ReorderBufferChange \n\n>>             CommandId    cmax; \n\n>>             CommandId    combocid; \n\n>>         }            tuplecid; \n\n>> +        /* \n\n>> +         * Truncate data for REORDER_BUFFER_CHANGE_SEQUENCE representing one \n\n>> +         * set of relations to be truncated. \n\n>> +         */ \n\n>\n\n> What? \n\n\n\nWill fix the comment.\n\n\n\n\n>> +        struct \n\n>> +        { \n\n>> +            RelFileNode relnode; \n\n>> +            ReorderBufferTupleBuf *newtuple; \n\n>> +        }            sequence; \n\n>>     }            data; \n\n>> \n\n>>     /* \n\n>\n\n> I don't think we should expose sequence changes via their tuples - \n\n> that'll unnecessarily expose a lot of implementation details. \n\n\n\nCan you elaborate more on this? A sequence writes its tuple data in WAL and triggers a change\n\nevent to the logical decoding logic. What else can I use to expose a sequence change?\n\n\n\nBest\n\n\n\n\nCary Huang\n\n-------------\n\nHighGo Software Inc. 
(Canada)\n\nmailto:cary.huang@highgo.ca\n\nhttp://www.highgo.ca\n\n\n\n\n\n\n\n\n---- On Wed, 25 Mar 2020 12:27:28 -0700 Andres Freund <andres@anarazel.de> wrote ----\n\n\nHi, \n \nOn 2020-03-24 16:19:21 -0700, Cary Huang wrote: \n> I have shared a patch that allows sequence relation to be supported in \n> logical replication via the decoding plugin ( test_decoding for \n> example ); it does not support sequence relation in logical \n> replication between a PG publisher and a PG subscriber via pgoutput \n> plugin as it will require much more work. \n \nCould you expand on \"much more work\"? Once decoding support is there, \nthat shouldn't be that much? \n \n \n> Sequence changes caused by other sequence-related SQL functions like \n> setval() or ALTER SEQUENCE xxx, will always emit a WAL update, so \n> replicating changes caused by these should not be a problem. \n \nI think this really would need to handle at the very least setval to \nmake sense. \n \n \n> For the replication to make sense, the patch actually disables the WAL \n> update at every 32 nextval() calls, so every call to nextval() will \n> emit a WAL update for proper replication. This is done by setting \n> SEQ_LOG_VALS to 0 in sequence.c \n \nWhy is that needed? ISTM updating in increments of 32 is fine for \nreplication purposes? It's good imo, because sending out more granular \nincrements would increase the size of the WAL stream? 
\n \n \n \n> diff --git a/contrib/test_decoding/test_decoding.c b/contrib/test_decoding/test_decoding.c \n> index 93c948856e..7a7e572d6c 100644 \n> --- a/contrib/test_decoding/test_decoding.c \n> +++ b/contrib/test_decoding/test_decoding.c \n> @@ -466,6 +466,15 @@ pg_decode_change(LogicalDecodingContext *ctx, ReorderBufferTXN *txn, \n>                                     &change->data.tp.oldtuple->tuple, \n>                                     true); \n>             break; \n> +        case REORDER_BUFFER_CHANGE_SEQUENCE: \n> +                    appendStringInfoString(ctx->out, \" SEQUENCE:\"); \n> +                    if (change->data.sequence.newtuple == NULL) \n> +                        appendStringInfoString(ctx->out, \" (no-tuple-data)\"); \n> +                    else \n> +                        tuple_to_stringinfo(ctx->out, tupdesc, \n> +                                            &change->data.sequence.newtuple->tuple, \n> +                                            false); \n> +                    break; \n>         default: \n>             Assert(false); \n>     } \n \nYou should also add tests - the main purpose of contrib/test_decoding is \nto facilitate writing those... \n \n \n> +    ReorderBufferXidSetCatalogChanges(ctx->reorder, XLogRecGetXid(buf->record), buf->origptr); \n \nHuh, why are you doing this? That's going to increase overhead of logical \ndecoding by many times? \n \n \n> +                case REORDER_BUFFER_CHANGE_SEQUENCE: \n> +                    Assert(snapshot_now); \n> + \n> +                    reloid = RelidByRelfilenode(change->data.sequence.relnode.spcNode, \n> +                                                change->data.sequence.relnode.relNode); \n> + \n> +                    if (reloid == InvalidOid && \n> +                        change->data.sequence.newtuple == NULL) \n> +                        goto change_done; \n \nI don't think this path should be needed? 
There's afaict no valid ways \nwe should be able to end up here without a tuple? \n \n \n> +                    if (!RelationIsLogicallyLogged(relation)) \n> +                        goto change_done; \n \nSimilarly, this seems superflous and should perhaps be an assertion? \n \n> +                    /* user-triggered change */ \n> +                    if (!IsToastRelation(relation)) \n> +                    { \n> +                        ReorderBufferToastReplace(rb, txn, relation, change); \n> +                        rb->apply_change(rb, txn, relation, change); \n> +                    } \n> +                    break; \n>             } \n>         } \n> \n \nThis doesn't make sense either. \n \n \n \n> diff --git a/src/include/replication/reorderbuffer.h b/src/include/replication/reorderbuffer.h \n> index 626ecf4dc9..cf3fd45c5f 100644 \n> --- a/src/include/replication/reorderbuffer.h \n> +++ b/src/include/replication/reorderbuffer.h \n> @@ -62,7 +62,8 @@ enum ReorderBufferChangeType \n>     REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID, \n>     REORDER_BUFFER_CHANGE_INTERNAL_SPEC_INSERT, \n>     REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM, \n> -    REORDER_BUFFER_CHANGE_TRUNCATE \n> +    REORDER_BUFFER_CHANGE_TRUNCATE, \n> +    REORDER_BUFFER_CHANGE_SEQUENCE, \n> }; \n> \n> /* forward declaration */ \n> @@ -149,6 +150,15 @@ typedef struct ReorderBufferChange \n>             CommandId    cmax; \n>             CommandId    combocid; \n>         }            tuplecid; \n> +        /* \n> +         * Truncate data for REORDER_BUFFER_CHANGE_SEQUENCE representing one \n> +         * set of relations to be truncated. \n> +         */ \n \nWhat? 
\n \n> +        struct \n> +        { \n> +            RelFileNode relnode; \n> +            ReorderBufferTupleBuf *newtuple; \n> +        }            sequence; \n>     }            data; \n> \n>     /* \n \nI don't think we should expose sequence changes via their tuples - \nthat'll unnecessarily expose a lot of implementation details. \n \nGreetings, \n \nAndres Freund", "msg_date": "Thu, 26 Mar 2020 15:33:33 -0700", "msg_from": "Cary Huang <cary.huang@highgo.ca>", "msg_from_op": true, "msg_subject": "Re: Include sequence relation support in logical replication" }, { "msg_contents": "Hi\n\n\n\nI would like to share a v2 of the sequence replication patch that allows logical replication of sequence relation via the decoding plugins at the moment. I have restored the original logic where sequence emits a WAL update every 32 increment, but instead of writing a future value to the WAL, it writes the current value, so the decoding plugin can receive the current sequence value. Regression tests for test_decoding have been updated to reflect the sequence changes and a new sequence test is added as well.\n\n\n\nBest\n\n\n\nCary Huang\n\n-------------\n\nHighGo Software Inc. (Canada)\n\nmailto:cary.huang@highgo.ca\n\nhttp://www.highgo.ca\n\n\n\n\n---- On Thu, 26 Mar 2020 15:33:33 -0700 Cary Huang <cary.huang@highgo.ca> wrote ----\n\n\n\nHi Andres\n\n\n\nthanks for your reply and your patch review. Please see my comments below\n\n\n\n>On 2020-03-24 16:19:21 -0700, Cary Huang wrote: \n\n>> I have shared a patch that allows sequence relation to be supported in \n\n>> logical replication via the decoding plugin ( test_decoding for \n\n>> example ); it does not support sequence relation in logical \n\n>> replication between a PG publisher and a PG subscriber via pgoutput \n\n>> plugin as it will require much more work. \n\n>\n\n> Could you expand on \"much more work\"? Once decoding support is there, \n\n> that shouldn't be that much? \n\n\n\nBy much more work, I meant more source files will need to be changed to have sequence replication \n\nsupported between a PG subscriber and publisher using pgoutput plugin. 
About 10 more source file changes.\n\nIdea is similar though.\n\n\n\n\n>> Sequence changes caused by other sequence-related SQL functions like \n\n>> setval() or ALTER SEQUENCE xxx, will always emit a WAL update, so \n\n>> replicating changes caused by these should not be a problem. \n\n>\n\n> I think this really would need to handle at the very least setval to \n\n> make sense. \n\n\n\nyes, sure\n\n\n\n>> For the replication to make sense, the patch actually disables the WAL \n\n>> update at every 32 nextval() calls, so every call to nextval() will \n\n>> emit a WAL update for proper replication. This is done by setting \n\n>> SEQ_LOG_VALS to 0 in sequence.c \n\n>\n\n> Why is that needed? ISTM updating in increments of 32 is fine for \n\n> replication purposes? It's good imo, because sending out more granular \n\n> increments would increase the size of the WAL stream? \n\n\n\nyes, updating WAL at every 32 increment is good and have huge performance benefits according \n\nto Michael, but when it is replicated logically to subscribers, the sequence value they receive would not \n\nmake much sense to them.\n\nFor example, \n\n\n\nIf i have a Sequence called \"seq\" with current value = 100 and increment = 5. The nextval('seq') call will\n\nreturn 105 to the client but it will write 260 to WAL record ( 100 + (5*32) ), because that is the value after 32\n\nincrements and internally it is also maintaining a \"log_cnt\" counter that tracks how many nextval() calls have been invoked\n\nsince the last WAL write, so it could kind of derive backwards to find the proper next value to return to client. \n\n\n\nBut the subscriber for this sequence will receive a change of 260 instead of 105, and it does not represent the current\n\nsequence status. 
Subscriber is not able to derive backwards because it does not know the increment size in its schema.\n\n\n\nSetting SEQ_LOG_VALS to 0 in sequence.c basically disables this 32 increment behavior and makes WAL update at every nextval() call\n\nand therefore the subscriber to this sequence will receive the same value (105) as the client, as a cost of more WAL writes.\n\n\n\nI would like to ask if you have some suggestions or ideas that can make subscriber receives the current value without the need to\n\ndisabling the 32 increment behavior?\n\n\n\n>> diff --git a/contrib/test_decoding/test_decoding.c b/contrib/test_decoding/test_decoding.c \n\n>> index 93c948856e..7a7e572d6c 100644 \n\n>> --- a/contrib/test_decoding/test_decoding.c \n\n>> +++ b/contrib/test_decoding/test_decoding.c \n\n>> @@ -466,6 +466,15 @@ pg_decode_change(LogicalDecodingContext *ctx, ReorderBufferTXN *txn, \n\n>>                                     &change->data.tp.oldtuple->tuple, \n\n>>                                     true); \n\n>>             break; \n\n>> +        case REORDER_BUFFER_CHANGE_SEQUENCE: \n\n>> +                    appendStringInfoString(ctx->out, \" SEQUENCE:\"); \n\n>> +                    if (change->data.sequence.newtuple == NULL) \n\n>> +                        appendStringInfoString(ctx->out, \" (no-tuple-data)\"); \n\n>> +                    else \n\n>> +                        tuple_to_stringinfo(ctx->out, tupdesc, \n\n>> +                                            &change->data.sequence.newtuple->tuple, \n\n>> +                                            false); \n\n>> +                    break; \n\n>>         default: \n\n>>             Assert(false); \n\n>>     } \n\n>\n\n> You should also add tests - the main purpose of contrib/test_decoding is \n\n> to facilitate writing those... \n\n\n\nthanks, I will add\n\n\n\n\n>> +    ReorderBufferXidSetCatalogChanges(ctx->reorder, XLogRecGetXid(buf->record), buf->origptr); \n\n>\n\n> Huh, why are you doing this? 
That's going to increase overhead of logical \n\n> decoding by many times? \n\n\n\nThis is needed to allow snapshot to be built inside DecodeCommit() function. Regular changes caused by INSERT also has this \n\nfunction called so I assume it is needed to ensure proper decoding. Without this, a snapshot will not be built and the change \n\ntransaction will not be logged\n\n\n\n\n>> +                case REORDER_BUFFER_CHANGE_SEQUENCE: \n\n>> +                    Assert(snapshot_now); \n\n>> + \n\n>> +                    reloid = RelidByRelfilenode(change->data.sequence.relnode.spcNode, \n\n>> +                                                change->data.sequence.relnode.relNode); \n\n>> + \n\n>> +                    if (reloid == InvalidOid && \n\n>> +                        change->data.sequence.newtuple == NULL) \n\n>> +                        goto change_done; \n\n>\n\n> I don't think this path should be needed? There's afaict no valid ways \n\n> we should be able to end up here without a tuple? \n\n\n\nyeah you are right, I can remove the tuple check\n\n\n\n>> +                    if (!RelationIsLogicallyLogged(relation)) \n\n>> +                        goto change_done; \n\n>\n\n> Similarly, this seems superflous and should perhaps be an assertion? \n\n\n\nI think it should be ok to check a relation like this, because it also will check the persistence of the relation and whether\n\nwal_level is set to 'logical'. 
It is commonly used in the regular INSERT cases so I thought it would make sense to use it\n\nfor sequence.\n\n\n\n>> +                    /* user-triggered change */ \n\n>> +                    if (!IsToastRelation(relation)) \n\n>> +                    { \n\n>> +                        ReorderBufferToastReplace(rb, txn, relation, change); \n\n>> +                        rb->apply_change(rb, txn, relation, change); \n\n>> +                    } \n\n>> +                    break; \n\n>>             } \n\n>>         } \n\n>> \n\n>\n\n> This doesn't make sense either. \n\n\n\nagreed, it should not be here.\n\n\n\n>> /* forward declaration */ \n\n>> @@ -149,6 +150,15 @@ typedef struct ReorderBufferChange \n\n>>             CommandId    cmax; \n\n>>             CommandId    combocid; \n\n>>         }            tuplecid; \n\n>> +        /* \n\n>> +         * Truncate data for REORDER_BUFFER_CHANGE_SEQUENCE representing one \n\n>> +         * set of relations to be truncated. \n\n>> +         */ \n\n>\n\n> What? \n\n\n\nWill fix the comment\n\n\n\n\n>> +        struct \n\n>> +        { \n\n>> +            RelFileNode relnode; \n\n>> +            ReorderBufferTupleBuf *newtuple; \n\n>> +        }            sequence; \n\n>>     }            data; \n\n>> \n\n>>     /* \n\n>\n\n> I don't think we should expose sequence changes via their tuples - \n\n> that'll unnecessarily expose a lot of implementation details. \n\n\n\nCan you elaborate more on this? Sequence writes its tuple data in WAL and triggers a change\n\nevent to logical decoding logic. What else can I use to expose a sequence change?\n\n\n\nBest\n\n\n\n\nCary Huang\n\n-------------\n\nHighGo Software Inc. 
(Canada)\n\nmailto:cary.huang@highgo.ca\n\nhttp://www.highgo.ca\n\n\n\n\n\n\n\n\n\n\n\n---- On Wed, 25 Mar 2020 12:27:28 -0700 Andres Freund <mailto:andres@anarazel.de> wrote ----\n\n\n\n\n\n\n\n\n\n\n\n\nHi, \n \nOn 2020-03-24 16:19:21 -0700, Cary Huang wrote: \n> I have shared a patch that allows sequence relation to be supported in \n> logical replication via the decoding plugin ( test_decoding for \n> example ); it does not support sequence relation in logical \n> replication between a PG publisher and a PG subscriber via pgoutput \n> plugin as it will require much more work. \n \nCould you expand on \"much more work\"? Once decoding support is there, \nthat shouldn't be that much? \n \n \n> Sequence changes caused by other sequence-related SQL functions like \n> setval() or ALTER SEQUENCE xxx, will always emit a WAL update, so \n> replicating changes caused by these should not be a problem. \n \nI think this really would need to handle at the very least setval to \nmake sense. \n \n \n> For the replication to make sense, the patch actually disables the WAL \n> update at every 32 nextval() calls, so every call to nextval() will \n> emit a WAL update for proper replication. This is done by setting \n> SEQ_LOG_VALS to 0 in sequence.c \n \nWhy is that needed? ISTM updating in increments of 32 is fine for \nreplication purposes? It's good imo, because sending out more granular \nincrements would increase the size of the WAL stream? 
\n \n \n \n> diff --git a/contrib/test_decoding/test_decoding.c b/contrib/test_decoding/test_decoding.c \n> index 93c948856e..7a7e572d6c 100644 \n> --- a/contrib/test_decoding/test_decoding.c \n> +++ b/contrib/test_decoding/test_decoding.c \n> @@ -466,6 +466,15 @@ pg_decode_change(LogicalDecodingContext *ctx, ReorderBufferTXN *txn, \n>                                     &change->data.tp.oldtuple->tuple, \n>                                     true); \n>             break; \n> +        case REORDER_BUFFER_CHANGE_SEQUENCE: \n> +                    appendStringInfoString(ctx->out, \" SEQUENCE:\"); \n> +                    if (change->data.sequence.newtuple == NULL) \n> +                        appendStringInfoString(ctx->out, \" (no-tuple-data)\"); \n> +                    else \n> +                        tuple_to_stringinfo(ctx->out, tupdesc, \n> +                                            &change->data.sequence.newtuple->tuple, \n> +                                            false); \n> +                    break; \n>         default: \n>             Assert(false); \n>     } \n \nYou should also add tests - the main purpose of contrib/test_decoding is \nto facilitate writing those... \n \n \n> +    ReorderBufferXidSetCatalogChanges(ctx->reorder, XLogRecGetXid(buf->record), buf->origptr); \n \nHuh, why are you doing this? That's going to increase overhead of logical \ndecoding by many times? \n \n \n> +                case REORDER_BUFFER_CHANGE_SEQUENCE: \n> +                    Assert(snapshot_now); \n> + \n> +                    reloid = RelidByRelfilenode(change->data.sequence.relnode.spcNode, \n> +                                                change->data.sequence.relnode.relNode); \n> + \n> +                    if (reloid == InvalidOid && \n> +                        change->data.sequence.newtuple == NULL) \n> +                        goto change_done; \n \nI don't think this path should be needed? 
There's afaict no valid ways \nwe should be able to end up here without a tuple? \n \n \n> +                    if (!RelationIsLogicallyLogged(relation)) \n> +                        goto change_done; \n \nSimilarly, this seems superflous and should perhaps be an assertion? \n \n> +                    /* user-triggered change */ \n> +                    if (!IsToastRelation(relation)) \n> +                    { \n> +                        ReorderBufferToastReplace(rb, txn, relation, change); \n> +                        rb->apply_change(rb, txn, relation, change); \n> +                    } \n> +                    break; \n>             } \n>         } \n> \n \nThis doesn't make sense either. \n \n \n \n> diff --git a/src/include/replication/reorderbuffer.h b/src/include/replication/reorderbuffer.h \n> index 626ecf4dc9..cf3fd45c5f 100644 \n> --- a/src/include/replication/reorderbuffer.h \n> +++ b/src/include/replication/reorderbuffer.h \n> @@ -62,7 +62,8 @@ enum ReorderBufferChangeType \n>     REORDER_BUFFER_CHANGE_INTERNAL_TUPLECID, \n>     REORDER_BUFFER_CHANGE_INTERNAL_SPEC_INSERT, \n>     REORDER_BUFFER_CHANGE_INTERNAL_SPEC_CONFIRM, \n> -    REORDER_BUFFER_CHANGE_TRUNCATE \n> +    REORDER_BUFFER_CHANGE_TRUNCATE, \n> +    REORDER_BUFFER_CHANGE_SEQUENCE, \n> }; \n> \n> /* forward declaration */ \n> @@ -149,6 +150,15 @@ typedef struct ReorderBufferChange \n>             CommandId    cmax; \n>             CommandId    combocid; \n>         }            tuplecid; \n> +        /* \n> +         * Truncate data for REORDER_BUFFER_CHANGE_SEQUENCE representing one \n> +         * set of relations to be truncated. \n> +         */ \n \nWhat? 
\n \n> +        struct \n> +        { \n> +            RelFileNode relnode; \n> +            ReorderBufferTupleBuf *newtuple; \n> +        }            sequence; \n>     }            data; \n> \n>     /* \n \nI don't think we should expose sequence changes via their tuples - \nthat'll unnecessarily expose a lot of implementation details. \n \nGreetings, \n \nAndres Freund", "msg_date": "Wed, 15 Apr 2020 16:27:02 -0700", "msg_from": "Cary Huang <cary.huang@highgo.ca>", "msg_from_op": true, "msg_subject": "Re: Include sequence relation support in logical replication" }, { "msg_contents": "Hi,\n\nOn 2020-03-26 15:33:33 -0700, Cary Huang wrote:\n\n> >> For the replication to make sense, the patch actually disables the WAL \n> \n> >> update at every 32 nextval() calls, so every call to nextval() will \n> \n> >> emit a WAL update for proper replication. This is done by setting \n> \n> >> SEQ_LOG_VALS to 0 in sequence.c�\n> \n> >\n> \n> > Why is that needed? ISTM updating in increments of 32 is fine for \n> \n> > replication purposes? It's good imo, because sending out more granular \n> \n> > increments would increase the size of the WAL stream? \n> \n> \n> \n> yes, updating WAL at every 32 increment is good and have huge performance benefits according�\n> \n> to Michael, but when it is replicated logically to subscribers, the sequence value they receive would not \n> \n> make much sense to them.\n> \n> For example,�\n> \n> \n> \n> If i have a Sequence called \"seq\" with current value = 100 and increment = 5. 
The nextval('seq') call will\n> \n> return 105 to the client but it will write 260 to WAL record ( 100 + (5*32) ), because that is the value after 32\n> \n> increments and internally it is also maintaining a "log_cnt" counter that tracks how many nextval() calls have been invoked\n> \n> since the last WAL write, so it could kind of derive backwards to find the proper next value to return to client.\n> \n> \n> \n> But the subscriber for this sequence will receive a change of 260 instead of 105, and it does not represent the current\n> \n> sequence status. Subscriber is not able to derive backwards because it does not know the increment size in its schema.\n\nWhat is the problem with the subscriber seeing 260? This already can\nhappen on the primary today, if there is a crash / immediate\nrestart. But that is fine - sequences don't guarantee that they are free\nof gaps, just that each call will return a bigger value than before.\n\n\n> \n> Setting SEQ_LOG_VALS to 0 in sequence.c basically disables this 32 increment behavior and makes WAL update at every nextval() call\n> \n> and therefore the subscriber to this sequence will receive the same value (105) as the client, as a cost of more WAL writes.\n> \n> \n> \n> I would like to ask if you have some suggestions or ideas that can make subscriber receives the current value without the need to\n> \n> disabling the 32 increment behavior?\n\nIt simply shouldn't expect that to be the case.  What do you need it\nfor?\n\nAs far as I can tell replicating sequence values is useful to allow\nfailover, by ensuring all sequences will return sensible values going\nforward. 
That does not require to know precise values.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 15 Apr 2020 16:44:08 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Include sequence relation support in logical replication" }, { "msg_contents": "On Thu, 16 Apr 2020 at 07:44, Andres Freund <andres@anarazel.de> wrote:\n\n>\n> > I would like to ask if you have some suggestions or ideas that can make\n> subscriber receives the current value without the need to\n> >\n> > disabling the 32 increment behavior?\n>\n> It simply shouldn't expect that to be the case.  What do you need it\n> for?\n>\n> As far as I can tell replicating sequence values is useful to allow\n> failover, by ensuring all sequences will return sensible values going\n> forward. That does not require to know precise values.\n\n\nTotally agree. Code that relies on getting specific sequence values is\nbroken code. Alas, very common, but still broken.\n\nCary, by way of background a large part of why this wasn't supported by\nlogical decoding back in 9.4 is that until the pg_catalog.pg_sequence\nrelation was introduced in PostgreSQL 10, the sequence relfilenode\nintermixed a bunch of transactional and non-transactional state in a very\nmessy way. 
This made it very hard to achieve sensible behaviour for logical\ndecoding.\n\nAs it is, make sure your regression tests carefully cover the following\ncases, as TAP tests in src/test/recovery, probably a new module for logical\ndecoding of sequences:\n\n1.\n\n* Begin txn\n* Create sequence\n* Call nextval() on sequence over generate_series() and discard results\n* Rollback\n* Issue a dummy insert+commit to some other table to force logical decoding\nto send something\n* Ensure subscription catches up successfully\n\nThis checks that we cope with advances for a sequence that doesn't get\ncreated.\n\n2.\n\n* Begin 1st txn\n* Create a sequence\n* Use the sequence to populate a temp table with enough rows to ensure\nsequence updates are written\n* Begin a 2nd txn\n* Issue a dummy insert+commit to some other table to force logical decoding\nto send something\n* Commit the 2nd txn\n* Commit the 1st txn\n* Wait for subscription catchup\n* Check that the sequence value on the subscriber reflects the value after\nsequence advance, not the value at creation time\n\nThis makes sure that sequence advances are handled sensibly when they\narrive for a sequence that does not yet exist in the catalogs.\n\nYou'll need to run psql in an IPC::Run background session for that. We\nshould really have a helper for this.
I'll see if I'm allowed to post the one I use for my own TAP tests to the list.", "msg_date": "Thu, 16 Apr 2020 13:18:28 +0800", "msg_from": "Craig Ringer <craig@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Include sequence relation support in logical replication" }, { "msg_contents": "Hi Craig, Andres\n\n\n\nThank you guys so much for your reviews and comments. Really helpful. Yes you guys are right, Sequence does not guarantee free of gaps and replicating sequence is useful for failover cases, then there will be no problem for a subscriber to get a future value 32 increments after. I will do more analysis on my end based on your comments and refine the patch with better test cases. Much appreciated of your help.\n\n\n\nBest regards\n\n\n\nCary Huang\n\n-------------\n\nHighGo Software Inc. (Canada)\n\nmailto:cary.huang@highgo.ca\n\nhttp://www.highgo.ca\n\n\n\n\n---- On Wed, 15 Apr 2020 22:18:28 -0700 Craig Ringer <craig@2ndquadrant.com> wrote ----\n\n\n\nOn Thu, 16 Apr 2020 at 07:44, Andres Freund <mailto:andres@anarazel.de> wrote:\n\n\n> I would like to ask if you have some suggestions or ideas that can make subscriber receives the current value without the need to\n > \n > disabling the 32 increment behavior?\n \n It simply shouldn't expect that to be the case.  What do you need it\n for?\n \n As far as I can tell replicating sequence values is useful to allow\n failover, by ensuring all sequences will return sensible values going\n forward. That does not require to now precise values.\n\n\nTotally agree. Code that relies on getting specific sequence values is broken code. Alas, very common, but still broken.\n\n\n\nCary, by way of background a large part of why this wasn't supported by logical decoding back in 9.4 is that until the pg_catalog.pg_sequence relation was introduced in PostgreSQL 10, the sequence relfilenode intermixed a bunch of transactional and non-transactional state in a very messy way. 
This made it very hard to achieve sensible behaviour for logical decoding.\n\n\n\nAs it is, make sure your regression tests carefully cover the following cases, as TAP tests in src/test/recovery, probably a new module for logical decoding of sequences:\n\n\n\n1.\n\n\n\n* Begin txn\n\n* Create sequence\n\n* Call nextval() on sequence over generate_series() and discard results\n\n* Rollback\n\n* Issue a dummy insert+commit to some other table to force logical decoding to send something\n\n* Ensure subscription catches up successfully\n\n\n\nThis checks that we cope with advances for a sequence that doesn't get created.\n\n\n\n2.\n\n \n\n* Begin 1st txn\n\n* Create a sequence\n\n* Use the sequence to populate a temp table with enough rows to ensure sequence updates are written\n\n* Begin a 2nd txn\n\n* Issue a dummy insert+commit to some other table to force logical decoding to send something\n\n\n\n* Commit the 2nd txn\n\n* Commit the 1st txn\n\n* Wait for subscription catchup\n\n* Check that the sequence value on the subscriber reflects the value after sequence advance, not the value at creation time\n\n\n\nThis makes sure that sequence advances are handled sensibly when they arrive for a sequence that does not yet exist in the catalogs.\n\n\n\nYou'll need to run psql in an IPC::Run background session for that. We should really have a helper for this. I'll see if I'm allowed to post the one I use for my own TAP tests to the list.
", "msg_date": "Thu, 16 Apr 2020 09:45:06 -0700", "msg_from": "Cary Huang <cary.huang@highgo.ca>", "msg_from_op": true, "msg_subject": "Re: Include sequence relation support in logical replication" }, { "msg_contents": "Hi Craig\n\n\n\nI have added more regression test cases to the sequence replication patch with emphasis on transactions and rollback per your suggestions. I find that when a transaction is aborted with rollback, the decoder plugin will not receive the change but the sequence value will in fact advance if nextval() or setval() were called. I have also made sequence replication an optional parameter in test_decoding so other test_decoding regression test cases will not need an update due to the new sequence replication function. The sequence update in this patch will emit an wal update every 32 increment, and each update is a future value 32 increments after like it was originally, so it is no longer required getting a specific value of sequence.\n\n\nCould you elaborate more on your second case where it requires a psql in an IPC::Run background session and check if regression needs more coverage on certain cases?\nthank you!\n\n\nCary Huang\n\n-------------\n\nHighGo Software Inc. 
(Canada)\n\nmailto:cary.huang@highgo.ca\n\nhttp://www.highgo.ca\n\n\n\n---- On Thu, 16 Apr 2020 09:45:06 -0700 Cary Huang <mailto:cary.huang@highgo.ca> wrote ----\n\n\nHi Craig, Andres\n\n\n\nThank you guys so much for your reviews and comments. Really helpful. Yes you guys are right, Sequence does not guarantee free of gaps and replicating sequence is useful for failover cases, then there will be no problem for a subscriber to get a future value 32 increments after. I will do more analysis on my end based on your comments and refine the patch with better test cases. Much appreciated of your help.\n\n\n\nBest regards\n\n\n\nCary Huang\n\n-------------\n\nHighGo Software Inc. (Canada)\n\nmailto:cary.huang@highgo.ca\n\nhttp://www.highgo.ca\n\n\n\n\n---- On Wed, 15 Apr 2020 22:18:28 -0700 Craig Ringer <mailto:craig@2ndquadrant.com> wrote ----\n\n\n\n\n\n\n\n\n\n\n\nOn Thu, 16 Apr 2020 at 07:44, Andres Freund <mailto:andres@anarazel.de> wrote:\n\n\n> I would like to ask if you have some suggestions or ideas that can make subscriber receives the current value without the need to\n > \n > disabling the 32 increment behavior?\n \n It simply shouldn't expect that to be the case.  What do you need it\n for?\n \n As far as I can tell replicating sequence values is useful to allow\n failover, by ensuring all sequences will return sensible values going\n forward. That does not require to now precise values.\n\n\nTotally agree. Code that relies on getting specific sequence values is broken code. Alas, very common, but still broken.\n\n\n\nCary, by way of background a large part of why this wasn't supported by logical decoding back in 9.4 is that until the pg_catalog.pg_sequence relation was introduced in PostgreSQL 10, the sequence relfilenode intermixed a bunch of transactional and non-transactional state in a very messy way. 
This made it very hard to achieve sensible behaviour for logical decoding.\n\n\n\nAs it is, make sure your regression tests carefully cover the following cases, as TAP tests in src/test/recovery, probably a new module for logical decoding of sequences:\n\n\n\n1.\n\n\n\n* Begin txn\n\n* Create sequence\n\n* Call nextval() on sequence over generate_series() and discard results\n\n* Rollback\n\n* Issue a dummy insert+commit to some other table to force logical decoding to send something\n\n* Ensure subscription catches up successfully\n\n\n\nThis checks that we cope with advances for a sequence that doesn't get created.\n\n\n\n2.\n\n \n\n* Begin 1st txn\n\n* Create a sequence\n\n* Use the sequence to populate a temp table with enough rows to ensure sequence updates are written\n\n* Begin a 2nd txn\n\n* Issue a dummy insert+commit to some other table to force logical decoding to send something\n\n\n\n* Commit the 2nd txn\n\n* Commit the 1st txn\n\n* Wait for subscription catchup\n\n* Check that the sequence value on the subscriber reflects the value after sequence advance, not the value at creation time\n\n\n\nThis makes sure that sequence advances are handled sensibly when they arrive for a sequence that does not yet exist in the catalogs.\n\n\n\nYou'll need to run psql in an IPC::Run background session for that. We should really have a helper for this. I'll see if I'm allowed to post the one I use for my own TAP tests to the list.", "msg_date": "Fri, 08 May 2020 16:32:38 -0700", "msg_from": "Cary Huang <cary.huang@highgo.ca>", "msg_from_op": true, "msg_subject": "Re: Include sequence relation support in logical replication" }, { "msg_contents": "Hi,\n\nOn 2020-05-08 16:32:38 -0700, Cary Huang wrote:\n> I have added more regression test cases to the sequence replication\n> patch with emphasis on transactions and rollback per your\n> suggestions. 
I find that when a transaction is aborted with rollback,\n> the decoder plugin will not receive the change but the sequence value\n> will in fact advance if nextval() or setval() were called.\n\nRight. The sequence advances shouldn't be treated\ntransactionally. That's already (optionally) done similarly for\nmessages, so you should be able to copy that code.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 8 May 2020 17:26:49 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Include sequence relation support in logical replication" } ]
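The 32-increment WAL pre-logging debated in the thread above can be sketched as a toy model. The names below are invented for illustration; this is a simplified model of the behavior described in the emails, not the actual sequence.c code:

```python
SEQ_LOG_VALS = 32  # how many fetched-ahead values one WAL record covers

class SeqModel:
    """Toy model of the nextval() pre-logging described above."""

    def __init__(self, last_value, increment):
        self.last_value = last_value
        self.increment = increment
        self.log_cnt = 0        # fetched-ahead values still unlogged
        self.wal_value = None   # value the most recent WAL record carried

    def nextval(self):
        if self.log_cnt == 0:
            # Pre-log a value SEQ_LOG_VALS increments in the future, so
            # the next 32 calls need no further WAL record.
            self.wal_value = self.last_value + self.increment * SEQ_LOG_VALS
            self.log_cnt = SEQ_LOG_VALS
        self.last_value += self.increment
        self.log_cnt -= 1
        return self.last_value

# Cary's example: current value 100, increment 5.  The client gets 105,
# while the WAL record (what a logical decoder sees) carries
# 260 = 100 + 5 * 32.
seq = SeqModel(last_value=100, increment=5)
client_value = seq.nextval()
```

After a crash, or on a node replaying the WAL, the sequence resumes from the pre-logged future value, which is why gaps — but never repeats — can appear; this is the point Andres and Craig make about not relying on precise values.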
[ { "msg_contents": "Attached is a small patch that introduces a simple function,\nAllocSetEstimateChunkSpace(), and uses it to improve the estimate for\nthe size of a hash entry for hash aggregation.\n\nGetting reasonable estimates for the hash entry size (including the\nTupleHashEntryData, the group key tuple, the per-group state, and by-\nref transition values) is important for multiple reasons:\n\n* It helps the planner know when hash aggregation is likely to spill,\nand how to cost it.\n\n* It helps hash aggregation regulate itself by setting a group limit\n(separate from the memory limit) to account for growing by-ref\ntransition values.\n\n* It helps choose a good initial size for the hash table. Too large of\na hash table will crowd out space that could be used for the group keys\nor the per-group state.\n\nEach group requires up to three palloc chunks: one for the group key\ntuple, one for the per-group states, and one for a by-ref transition\nvalue. Each chunk can have a lot of overhead: in addition to the chunk\nheader (16 bytes overhead), it also rounds the value up to a power of\ntwo (~25% overhead). So, it's important to account for this chunk\noverhead.\n\nI considered making it a method of a memory context, but the planner\nwill call this function before the hash agg memory context is created.\nIt seems to make more sense for this to just be an AllocSet-specific\nfunction for now.\n\nRegards,\n\tJeff Davis", "msg_date": "Tue, 24 Mar 2020 18:12:03 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "AllocSetEstimateChunkSpace()" }, { "msg_contents": "On Tue, 2020-03-24 at 18:12 -0700, Jeff Davis wrote:\n> I considered making it a method of a memory context, but the planner\n> will call this function before the hash agg memory context is\n> created.\n\nI implemented this approach also; attached.\n\nIt works a little better by having access to the memory context, so it\nknows set->allocChunkLimit. 
It also allows it to work with the slab\nallocator, which needs the memory context to return a useful number.\nHowever, this introduces more code and awkwardly needs to use\nCurrentMemoryContext when called from the planner (because the actual\nmemory context we're try to estimate for doesn't exist yet).\n\nI slightly favor the previous approach (mentioned in the parent email) \nbecause it's simple and effective. But I'm fine with this one if\nsomeone thinks it will be better for other use cases.\n\nRegards,\n\tJeff Davis", "msg_date": "Wed, 25 Mar 2020 11:46:31 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: AllocSetEstimateChunkSpace()" }, { "msg_contents": "Hi,\n\nOn 2020-03-24 18:12:03 -0700, Jeff Davis wrote:\n> Attached is a small patch that introduces a simple function,\n> AllocSetEstimateChunkSpace(), and uses it to improve the estimate for\n> the size of a hash entry for hash aggregation.\n> \n> Getting reasonable estimates for the hash entry size (including the\n> TupleHashEntryData, the group key tuple, the per-group state, and by-\n> ref transition values) is important for multiple reasons:\n> \n> * It helps the planner know when hash aggregation is likely to spill,\n> and how to cost it.\n> \n> * It helps hash aggregation regulate itself by setting a group limit\n> (separate from the memory limit) to account for growing by-ref\n> transition values.\n> \n> * It helps choose a good initial size for the hash table. Too large of\n> a hash table will crowd out space that could be used for the group keys\n> or the per-group state.\n> \n> Each group requires up to three palloc chunks: one for the group key\n> tuple, one for the per-group states, and one for a by-ref transition\n> value. Each chunk can have a lot of overhead: in addition to the chunk\n> header (16 bytes overhead), it also rounds the value up to a power of\n> two (~25% overhead). 
So, it's important to account for this chunk\n> overhead.\n\nAs mention on im/call: I think this is mainly an argument for combining\nthe group tuple & per-group state allocations. I'm kind of fine with\nafterwards taking the allocator overhead into account.\n\n\nI still don't buy that its useful to estimate the by-ref transition\nvalue overhead. We just don't have anything even have close enough to a\nmeaningful value to base this on. Even if we want to consider the\ninitial transition value or something, we'd be better off initially\nover-estimating the size of the transition state by a lot more than 25%\n(I am thinking more like 4x or so, with a minumum of 128 bytes or\nso). Since this is about the initial size of the hashtable, we're better\noff with a too small table, imo. A \"too large\" table is more likely to\nend up needing to spill when filled to only a small degree.\n\n\nI am kind of wondering if there's actually much point in trying to be\naccurate here at all. Especially when doing this from the\nplanner. Since, for a large fraction of aggregates, we're going to very\nroughly guess at transition space anyway, it seems like a bunch of\n\"overhead factors\" could turn out to be better than trying to be\naccurate on some parts, while still widely guessing at transition space.\nBut I'm not sure.\n\n\n> I considered making it a method of a memory context, but the planner\n> will call this function before the hash agg memory context is created.\n> It seems to make more sense for this to just be an AllocSet-specific\n> function for now.\n\n-1 to this approach. 
I think it's architecturally the wrong direction to\nadd more direct calls to functions within specific contexts.\n\n\n\nOn 2020-03-25 11:46:31 -0700, Jeff Davis wrote:\n> On Tue, 2020-03-24 at 18:12 -0700, Jeff Davis wrote:\n> > I considered making it a method of a memory context, but the planner\n> > will call this function before the hash agg memory context is\n> > created.\n> \n> I implemented this approach also; attached.\n> \n> It works a little better by having access to the memory context, so it\n> knows set->allocChunkLimit. It also allows it to work with the slab\n> allocator, which needs the memory context to return a useful number.\n> However, this introduces more code and awkwardly needs to use\n> CurrentMemoryContext when called from the planner (because the actual\n> memory context we're try to estimate for doesn't exist yet).\n\nYea, the \"context needs to exist\" part sucks. I really don't want to add\ncalls directly into AllocSet from more places though. And just ignoring\nthe parameters to the context seems wrong too.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 25 Mar 2020 12:42:35 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: AllocSetEstimateChunkSpace()" }, { "msg_contents": "On Wed, 2020-03-25 at 12:42 -0700, Andres Freund wrote:\n> As mention on im/call: I think this is mainly an argument for\n> combining\n> the group tuple & per-group state allocations. I'm kind of fine with\n> afterwards taking the allocator overhead into account.\n\nThe overhead comes from two places: (a) the chunk header; and (b) the\nround-up-to-nearest-power-of-two behavior.\n\nCombining the group tuple and per-group states only saves the overhead\nfrom (a); it does nothing to help (b), which is often bigger. And it\nonly saves that overhead when there *is* a per-group state (i.e. not\nfor a DISTINCT query).\n\n> I still don't buy that its useful to estimate the by-ref transition\n> value overhead. 
We just don't have anything even have close enough to\n> a\n> meaningful value to base this on. \n\nBy-ref transition values aren't a primary motivation for me. I'm fine\nleaving that discussion separate if that's a sticking point. But if we\ndo have a way to measure chunk overhead, I don't really see a reason\nnot to use it for by-ref as well.\n\n> -1 to [AllocSet-specific] approach. I think it's architecturally the\n> wrong direction to\n> add more direct calls to functions within specific contexts.\n\nOK.\n\n> Yea, the \"context needs to exist\" part sucks. I really don't want to\n> add\n> calls directly into AllocSet from more places though. And just\n> ignoring\n> the parameters to the context seems wrong too.\n\nSo do you generally favor the EstimateMemoryChunkSpace() patch (that\nworks for all context types)? Or do you have another suggestion? Or do\nyou think we should just do nothing?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Wed, 25 Mar 2020 14:43:43 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: AllocSetEstimateChunkSpace()" }, { "msg_contents": "Hi,\n\nOn 2020-03-25 14:43:43 -0700, Jeff Davis wrote:\n> On Wed, 2020-03-25 at 12:42 -0700, Andres Freund wrote:\n> > As mention on im/call: I think this is mainly an argument for\n> > combining\n> > the group tuple & per-group state allocations. I'm kind of fine with\n> > afterwards taking the allocator overhead into account.\n> \n> The overhead comes from two places: (a) the chunk header; and (b) the\n> round-up-to-nearest-power-of-two behavior.\n> \n> Combining the group tuple and per-group states only saves the overhead\n> from (a); it does nothing to help (b), which is often bigger.\n\nHm? It very well can help with b), since the round-up only happens once\nnow? I guess you could argue that it's possible that afterwards we'd\nmore likely to end in a bigger size class, and thus have roughly the\nsame amount of waste due rounding? 
But I don't think that's all that\nconvincing.\n\nI still, as I mentioned on the call, suspect that the right thing here\nis to use an allocation strategy that suffers from neither a nor b (for\ntuple and pergroup) and that has faster allocations too. That then also\nwould have the consequence that we don't need to care about per-alloc\noverhead anymore (be it a or b).\n\n\n> And it only saves that overhead when there *is* a per-group state\n> (i.e. not for a DISTINCT query).\n\nSo?\n\n\n> > I still don't buy that its useful to estimate the by-ref transition\n> > value overhead. We just don't have anything even have close enough to\n> > a\n> > meaningful value to base this on. \n> \n> By-ref transition values aren't a primary motivation for me. I'm fine\n> leaving that discussion separate if that's a sticking point. But if we\n> do have a way to measure chunk overhead, I don't really see a reason\n> not to use it for by-ref as well.\n\nWell, my point is that it's pretty much pointless for by-ref types. The\nsize estimates, if they exist, are so inaccurate that we don't gain\nanything by including it. As I said before, I think we'd be better off\ninitially assuming a higher transition space estimate.\n\n\n> > Yea, the \"context needs to exist\" part sucks. I really don't want to\n> > add\n> > calls directly into AllocSet from more places though. And just\n> > ignoring\n> > the parameters to the context seems wrong too.\n> \n> So do you generally favor the EstimateMemoryChunkSpace() patch (that\n> works for all context types)? Or do you have another suggestion? Or do\n> you think we should just do nothing?\n\nI think I'm increasingly leaning towards either using a constant\noverhead factor, or just getting rid of all memory context\noverhead. 
There's clearly no obviously correct design for the \"chunk\nsize\" functions, and not having overhead is better than ~correctly\nestimating it.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 25 Mar 2020 15:09:36 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: AllocSetEstimateChunkSpace()" }, { "msg_contents": "On Wed, 2020-03-25 at 15:09 -0700, Andres Freund wrote:\n> > The overhead comes from two places: (a) the chunk header; and (b)\n> > the\n> > round-up-to-nearest-power-of-two behavior.\n> > \n> > Combining the group tuple and per-group states only saves the\n> > overhead\n> > from (a); it does nothing to help (b), which is often bigger.\n> \n> Hm? It very well can help with b), since the round-up only happens\n> once\n> now? I guess you could argue that it's possible that afterwards we'd\n> more likely to end in a bigger size class, and thus have roughly the\n> same amount of waste due rounding? But I don't think that's all that\n> convincing.\n\nWhy is that not convincing? Each size class is double the previous one,\nso piling double the memory into a single allocation doesn't help at\nall. Two palloc(20)s turn into two 32-byte chunks; one palloc(40) turns\ninto a 64-byte chunk.\n\nYou might get lucky and the second chunk will fit in the wasted space\nfrom the first chunk; but when it does cross a boundary, it will be a\nbigger boundary and wipe out any efficiencies that you gained\npreviously.\n\nOf course it depends on the exact distribution. But I don't see any\nreason why we'd expect a distribution that would be favorable to\ncombining chunks together (except to avoid the chunk header, problem\n(a)).\n\n> I still, as I mentioned on the call, suspect that the right thing\n> here\n> is to use an allocation strategy that suffers from neither a nor b\n> (for\n> tuple and pergroup) and that has faster allocations too. 
That then\n> also\n> would have the consequence that we don't need to care about per-alloc\n> overhead anymore (be it a or b).\n\nIt might make sense for the next release but I'm wary of more churn in\nnodeAgg.c at this stage. It's not a trivial change because the\ndifferent allocations happen in different places and combining them\nwould be tricky.\n\n> > So do you generally favor the EstimateMemoryChunkSpace() patch\n> > (that\n> > works for all context types)? Or do you have another suggestion? Or\n> > do\n> > you think we should just do nothing?\n> \n> I think I'm increasingly leaning towards either using a constant\n> overhead factor, or just getting rid of all memory context\n> overhead. There's clearly no obviously correct design for the \"chunk\n> size\" functions, and not having overhead is better than ~correctly\n> estimating it.\n\nTrying to actually eliminate the overhead sounds like v14 to me.\n\nI believe the formula for AllocSet overhead can be approximated with:\n 16 + size/4\n\nThat would probably be better than a constant but a little hacky. I can\ndo that as a spot fix if this patch proves unpopular.\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Wed, 25 Mar 2020 17:16:05 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: AllocSetEstimateChunkSpace()" } ]
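The overhead this thread keeps circling — a 16-byte chunk header (a) plus rounding the request up to the next power of two (b), which Jeff's last message approximates as overhead ~= 16 + size/4 — can be condensed into a standalone sketch. Everything below is hypothetical (the model_* names are invented for illustration); the real logic lives in src/backend/utils/mmgr/aset.c:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Toy model of AllocSet per-chunk cost as described in the thread --
 * NOT the real aset.c code.  A chunk pays a 16-byte header, and the
 * request is rounded up to the next power of two (minimum 8 bytes)
 * while it stays under the large-chunk limit; oversized requests get
 * an exact-sized block of their own.
 */
#define MODEL_CHUNKHDRSZ   16
#define MODEL_CHUNK_LIMIT  8192

static size_t
model_chunk_space(size_t request)
{
    size_t      alloc = 8;

    if (request > MODEL_CHUNK_LIMIT)
        return MODEL_CHUNKHDRSZ + request;      /* oversized: exact size */

    while (alloc < request)
        alloc <<= 1;                            /* round up to power of two */

    return MODEL_CHUNKHDRSZ + alloc;
}

/*
 * Jeff's closing linear approximation: overhead ~= 16 + size/4, so the
 * estimated total is request + 16 + request/4.
 */
static size_t
model_chunk_space_approx(size_t request)
{
    return request + 16 + request / 4;
}
```

Plugging in Jeff's example: two palloc(20)s come out as 2 x model_chunk_space(20) = 96 bytes, while a single palloc(40) is model_chunk_space(40) = 80 — combining the two allocations saves exactly one 16-byte header in this model, which is the crux of the (a)-versus-(b) disagreement above.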
[ { "msg_contents": "Hi pghackers,\r\nThis is my first time posting here ... Gilles Darold and I are developing a new FDW which is based on the contrib/postgres_fdw. The postgres_fdw logic uses a RegisterXactCallback function to send local transactions remote. But I found that a registered XactCallback is not always called for a successful client ROLLBACK statement. So, a successful local ROLLBACK does not get sent remote by FDW, and now the local and remote transaction states are messed up, out of sync. The local database is \"eating\" the successful rollback.\r\n\r\nI attach a git format-patch file 0001-Fix-CommitTransactionCommand-to-CallXactCallbacks-in.patch\r\nThe patch fixes the problem and is ready to commit as far as I can tell. It adds some comment lines and three lines of code to CommitTransactionCommand() in the TBLOCK_ABORT_END case. Line (1) to reset the transaction's blockState back to TBLOCK_ABORT, ahead of (2) a new call to callXactCallbacks(). If the callback returns successful (which is usually the case), (3) the new code switches back to TBLOCK_ABORT_END, then continues with old code to CleanupTransaction() as before. If any callback does error out, the TBLOCK_ABORT was the correct block state for an error.\r\n\r\nI have not added a regression test since my scenario involves a C module ... I didn't see such a regression test, but somebody can teach me where to put the C module and Makefile if a regression test should be added. Heads up that the scenario to hit this is a bit complex (to me) ... 
I attach the following unit test files:\r\n1) eat_rollback.c, _PG_init() installs my_utility_hook() for INFO log for debugging.\r\nRegisterSubXactCallback(mySubtransactionCallback) which injects some logging and an ERROR for savepoints, i.e., negative testing, e.g., like a remote FDW savepoint failed.\r\nAnd RegisterXactTransaction(myTransactionCallback) with info logging.\r\n2) Makefile, to make the eat_rollback.so\r\n3) eat_rollback2.sql, drives the fail sequence, especially the SAVEPOINT, which errors out, then later a successful ROLLBACK gets incorrectly eaten (no CALLBACK info logging, demonstrates the bug), then another successful ROLLBACK (now there is CALLBACK info logging).\r\n4) eat_rollback2.out, output without the fix, the rollback at line 68 is successful but there is not myTransactionCallback() INFO output\r\n5) eat_rollback2.fixed, output after the fix, the rollback at line 68 is successful, and now there is a myTransactionCallback() INFO log. Success!\r\n\r\nIt first failed for me in v11.1, I suspect it failed since before then too. And it is still failing in current master.\r\n\r\nBye the way, we worked around the bug in our FDW by handling sub/xact in the utility hook. A transaction callback is still needed for implicit, internal rollbacks. Another advantage of the workaround is (I think) it reduces the code complexity and improves performance because the entire subxact stack is not unwound to drive each and every subtransaction level to remote. Also sub/transaction statements are sent remote as they arrive (local and remote are more \"transactionally\" synced, not stacked by FDW for later). And of course, the workaround doesn't hit the above bug, since our utility hook correctly handles the client ROLLBACK. If it makes sense to the community, I could try and patch contrib/postgres_fdw to do what we are doing. But note that I don't need it myself: we have our own new FDW for remote DB2 z/OS (!) ... 
at LzLabs we are a building a software defined mainframe with PostgreSQL (of all things).\r\n\r\nHope it helps!\r\n\r\nDave Sharpe\r\nI don't necessarily agree with everything I say. (MM)\r\nwww.lzlabs.com", "msg_date": "Wed, 25 Mar 2020 02:49:55 +0000", "msg_from": "Dave Sharpe <dave.sharpe@lzlabs.com>", "msg_from_op": true, "msg_subject": "[PATCH] Fix CommitTransactionCommand() to CallXactCallbacks() in\n TBLOCK_ABORT_END" }, { "msg_contents": "Le 25/03/2020 à 03:49, Dave Sharpe a écrit :\n>\n> Hi pghackers,\n>\n> This is my first time posting here ...  Gilles Darold and I are\n> developing a new FDW which is based on the contrib/postgres_fdw. The\n> postgres_fdw logic uses a RegisterXactCallback function to send local\n> transactions remote. But I found that a registered XactCallback is not\n> always called for a successful client ROLLBACK statement. So, a\n> successful local ROLLBACK does not get sent remote by FDW, and now the\n> local and remote transaction states are messed up, out of sync. The\n> local database is \"eating\" the successful rollback.\n>\n>  \n>\n> I attach a git format-patch file\n> 0001-Fix-CommitTransactionCommand-to-CallXactCallbacks-in.patch\n>\n> The patch fixes the problem and is ready to commit as far as I can\n> tell. It adds some comment lines and three lines of code to\n> CommitTransactionCommand() in the TBLOCK_ABORT_END case. Line (1) to\n> reset the transaction's blockState back to TBLOCK_ABORT, ahead of (2)\n> a new call to callXactCallbacks(). If the callback returns successful\n> (which is usually the case), (3) the new code switches back to\n> TBLOCK_ABORT_END, then continues with old code to CleanupTransaction()\n> as before. If any callback does error out, the TBLOCK_ABORT was the\n> correct block state for an error.\n>\n>  \n>\n> I have not added a regression test since my scenario involves a C\n> module ... 
I didn't see such a regression test, but somebody can teach\n> me where to put the C module and Makefile if a regression test should\n> be added. Heads up that the scenario to hit this is a bit complex (to\n> me) ... I attach the following unit test files:\n>\n> 1) eat_rollback.c, _/PG_init() installs my/_utility_hook() for INFO\n> log for debugging.\n>\n> RegisterSubXactCallback(mySubtransactionCallback) which injects some\n> logging and an ERROR for savepoints, i.e., negative testing, e.g.,\n> like a remote FDW savepoint failed.\n>\n> And RegisterXactTransaction(myTransactionCallback) with info logging.\n>\n> 2) Makefile, to make the eat_rollback.so\n>\n> 3) eat_rollback2.sql, drives the fail sequence, especially the\n> SAVEPOINT, which errors out, then later a successful ROLLBACK gets\n> incorrectly eaten (no CALLBACK info logging, demonstrates the bug),\n> then another successful ROLLBACK (now there is CALLBACK info logging).\n>\n> 4) eat_rollback2.out, output without the fix, the rollback at line 68\n> is successful but there is not myTransactionCallback() INFO output\n>\n> 5) eat_rollback2.fixed, output after the fix, the rollback at line 68\n> is successful, and now there is a myTransactionCallback() INFO log.\n> Success!\n>\n>  \n>\n> It first failed for me in v11.1, I suspect it failed since before then\n> too. And it is still failing in current master.\n>\n>  \n>\n> Bye the way, we worked around the bug in our FDW by handling sub/xact\n> in the utility hook. A transaction callback is still needed for\n> implicit, internal rollbacks. Another advantage of the workaround is\n> (I think) it reduces the code complexity and improves performance\n> because the entire subxact stack is not unwound to drive each and\n> every subtransaction level to remote. Also sub/transaction statements\n> are sent remote as they arrive (local and remote are more\n> \"transactionally\" synced, not stacked by FDW for later). 
And of\n> course, the workaround doesn't hit the above bug, since our utility\n> hook correctly handles the client ROLLBACK. If it makes sense to the\n> community, I could try and patch contrib/postgres_fdw to do what we\n> are doing. But note that I don't need it myself: we have our own new\n> FDW for remote DB2 z/OS (!) ... at LzLabs we are a building a software\n> defined mainframe with PostgreSQL (of all things).\n>\n>  \n>\n> Hope it helps!\n>\n>  \n>\n> Dave Sharpe\n>\n> /I don't necessarily agree with everything I say./(MM)\n>\n> www.lzlabs.com\n>\n\nHi,\n\n\nAs I'm involved in this thread I have given a review to this bug report\nand I don't think that there is a problem here but as a disclaimer my\nknowledge on internal transaction management is probably not enough to\nhave a definitive opinion.\n\n\nActually the callback function is called when the error is thrown:\n\n psql:eat_rollback2.sql:20: INFO:  00000: myTransactionCallback()\n XactEvent 2 (is abort) level 1 <-------------------\n LOCATION:  myTransactionCallback, eat_rollback.c:52\n psql:eat_rollback2.sql:20: ERROR:  XX000: no no no\n LOCATION:  mySubtransactionCallback, eat_rollback.c:65\n\n\nthis is probably why the callback is not called on the subsequent\nROLLBACK execution because abort processing is already done\n(src/backend/access/transam/xact.c:3890).\n\n\nWith this simple test and the debug extension:\n\n BEGIN;\n DROP INDEX \"index2\", \"index3\"; -- will fail indexes doesn't exist\n ROLLBACK;\n\n\nthe callback function is also called at error level, not on the ROLLBACK:\n\n ---------------------------------------------------------------------------------BEGIN\n psql:eat_rollback-2/sql/eat_rollback3.sql:39: INFO:  00000:\n my_utility_hook() utility transaction kind 0 (is not rollback)\n LOCATION:  my_utility_hook, eat_rollback.c:28\n BEGIN\n ---------------------------------------------------------------------------------DROP\n INDEX + error\n 
psql:eat_rollback-2/sql/eat_rollback3.sql:41: INFO:  00000:\n myTransactionCallback() XactEvent 2 (is abort) level 1 <---------------\n LOCATION:  myTransactionCallback, eat_rollback.c:52\n psql:eat_rollback-2/sql/eat_rollback3.sql:41: ERROR:  42704: index\n \"concur_index2\" does not exist\n LOCATION:  DropErrorMsgNonExistent, tablecmds.c:1209\n ---------------------------------------------------------------------------------ROLLBACK\n psql:eat_rollback-2/sql/eat_rollback3.sql:43: INFO:  00000:\n my_utility_hook() utility transaction kind 3 (is rollback)\n LOCATION:  my_utility_hook, eat_rollback.c:28\n ROLLBACK\n\n\nSo this is probably the expected behavior , however the proposed patch\nwill throw the callback function two times which might not be what we want.\n\n\n ---------------------------------------------------------------------------------BEGIN\n psql:eat_rollback-2/sql/eat_rollback3.sql:39: INFO:  00000:\n my_utility_hook() utility transaction kind 0 (is not rollback)\n LOCATION:  my_utility_hook, eat_rollback.c:28\n BEGIN\n ---------------------------------------------------------------------------------DROP\n INDEX + error\n psql:eat_rollback-2/sql/eat_rollback3.sql:41: INFO:  00000:\n myTransactionCallback() XactEvent 2 (is abort) level 1\n <-------------- first call\n LOCATION:  myTransactionCallback, eat_rollback.c:52\n psql:eat_rollback-2/sql/eat_rollback3.sql:41: ERROR:  42704: index\n \"concur_index2\" does not exist\n LOCATION:  DropErrorMsgNonExistent, tablecmds.c:1209\n ---------------------------------------------------------------------------------ROLLBACK\n psql:eat_rollback-2/sql/eat_rollback3.sql:43: INFO:  00000:\n my_utility_hook() utility transaction kind 3 (is\n rollback)<-------------- second call\n LOCATION:  my_utility_hook, eat_rollback.c:28\n psql:eat_rollback-2/sql/eat_rollback3.sql:43: INFO:  00000:\n myTransactionCallback() XactEvent 2 (is abort) level 1\n LOCATION:  myTransactionCallback, eat_rollback.c:52\n ROLLBACK\n\n\nBut 
again I may be wrong.\n\n\n-- \nGilles Darold\nhttp://www.darold.net/", "msg_date": "Thu, 26 Mar 2020 17:09:04 +0100", "msg_from": "Gilles Darold <gilles@darold.net>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Fix CommitTransactionCommand() to CallXactCallbacks() in\n TBLOCK_ABORT_END" }, { 
"msg_contents": "From Gilles Darold <gilles@darold.net> on 2020-03-26T16:09:04.\r\n> Actually the callback function is called when the error is thrown:\r\n\r\n> psql:eat_rollback2.sql:20: INFO: 00000: myTransactionCallback() XactEvent 2 (is abort) level 1 <-----------------\r\n> LOCATION: myTransactionCallback, eat_rollback.c:52\r\n> psql:eat_rollback2.sql:20: ERROR: XX000: no no no\r\n> LOCATION: mySubtransactionCallback, eat_rollback.c:65\r\n\r\n> this is probably why the callback is not called on the subsequent ROLLBACK execution because abort processing is\r\n> already done (src/backend/access/transam/xact.c:3890).\r\nSo I withdraw this patch and fix. The callback during the error will drive the ROLLBACK remote, as required in the fdw.\r\nGreat catch, thanks Gilles!\r\n\r\nCheers,\r\nDave\r\n\r\n\n\n\n\n\n\n\n\n\nFrom Gilles Darold <gilles@darold.net> on 2020-03-26T16:09:04.\n> \r\nActually the callback function is called when the error is thrown:\n\n\n> \r\npsql:eat_rollback2.sql:20: INFO:  00000: myTransactionCallback() XactEvent 2 (is abort) level 1 <-----------------\n> LOCATION:  myTransactionCallback, eat_rollback.c:52\n> psql:eat_rollback2.sql:20: ERROR:  XX000: no no no\n> LOCATION:  mySubtransactionCallback, eat_rollback.c:65\n\n> \r\nthis is probably why the callback is not called on the subsequent ROLLBACK execution because abort processing is\r\n>  already done (src/backend/access/transam/xact.c:3890).\nSo I withdraw this patch and fix. The callback during the error will drive the ROLLBACK remote, as required in the fdw.\nGreat catch, thanks Gilles!\n \nCheers,\nDave", "msg_date": "Sat, 28 Mar 2020 13:23:48 +0000", "msg_from": "Dave Sharpe <dave.sharpe@lzlabs.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Fix CommitTransactionCommand() to CallXactCallbacks() in\n TBLOCK_ABORT_END" } ]
[ { "msg_contents": "Hi hackers,\n\nPlaying with my undo-log storage, I found out that its performance is \nmostly limited by the generic WAL record mechanism,\nand particularly by the computeRegionDelta function, which computes the page \ndelta for each logged operation.\n\nI noticed that computeRegionDelta also becomes a bottleneck in other cases \nwhere generic WAL records are used, for example in the RUM extension.\nThis is a profile of inserting records into a table with a RUM index:\n\n32.99%  postgres  postgres            [.] computeRegionDelta\n    6.13%  postgres  rum.so              [.] updateItemIndexes\n    4.61%  postgres  postgres            [.] hash_search_with_hash_value\n    4.53%  postgres  postgres            [.] GenericXLogRegisterBuffer\n    3.74%  postgres  rum.so              [.] rumTraverseLock\n    3.33%  postgres  rum.so              [.] rumtuple_get_attrnum\n    3.24%  postgres  rum.so              [.] dataPlaceToPage\n    3.14%  postgres  postgres            [.] writeFragment\n    2.99%  postgres  libc-2.23.so        [.] __memcpy_avx_unaligned\n    2.81%  postgres  postgres            [.] nocache_index_getattr\n    2.72%  postgres  rum.so              [.] rumPlaceToDataPageLeaf\n    1.93%  postgres  postgres            [.] pg_comp_crc32c_sse42\n    1.87%  postgres  rum.so              [.] findInLeafPage\n    1.77%  postgres  postgres            [.] PinBuffer\n    1.52%  postgres  rum.so              [.] compareRumItem\n    1.49%  postgres  postgres            [.] FunctionCall2Coll\n    1.34%  postgres  rum.so              [.] entryLocateEntry\n    1.22%  postgres  libc-2.23.so        [.] __memcmp_sse4_1\n    0.97%  postgres  postgres            [.] 
LWLockAttemptLock\n\n\nI noticed that computeRegionDelta performs a byte-by-byte comparison of the page.\nThe obvious optimization is to compare words instead of bytes.\nA small patch with this optimization is attached.\nAdmittedly, it may lead to a small increase in the size of the produced deltas.\nIt is possible to calculate deltas more precisely: using word comparison \nto find the raw location of the region and then locating the precise boundaries using byte \ncomparisons.\nBut that complicates the algorithm and so makes it slower.\nIn practice, taking into account that a record header in Postgres is 24 \nbytes long and fields are usually aligned on 4/8 byte boundaries,\nI think that calculating deltas in words is preferable.\n\nResults of this optimization:\nPerformance of my UNDAM storage is increased from 6500 TPS to 7000 TPS \n(vs. 8500 for an unlogged table),\nand computeRegionDelta completely disappears from the RUM profile:\n\n    9.37%  postgres  rum.so              [.] updateItemIndexes\n    6.57%  postgres  postgres            [.] GenericXLogRegisterBuffer\n    5.85%  postgres  postgres            [.] hash_search_with_hash_value\n    5.54%  postgres  rum.so              [.] rumTraverseLock\n    5.09%  postgres  rum.so              [.] dataPlaceToPage\n    4.85%  postgres  postgres            [.] computeRegionDelta\n    4.78%  postgres  rum.so              [.] rumtuple_get_attrnum\n    4.28%  postgres  postgres            [.] nocache_index_getattr\n    4.23%  postgres  rum.so              [.] rumPlaceToDataPageLeaf\n    3.39%  postgres  postgres            [.] pg_comp_crc32c_sse42\n    3.16%  postgres  libc-2.23.so        [.] __memcpy_avx_unaligned\n    2.72%  postgres  rum.so              [.] findInLeafPage\n    2.64%  postgres  postgres            [.] PinBuffer\n    2.22%  postgres  postgres            [.] FunctionCall2Coll\n    2.22%  postgres  rum.so              [.] compareRumItem\n    1.91%  postgres  rum.so              [.] entryLocateEntry\n\nBut... 
the time of RUM insertion almost did not change: 1770 seconds vs. 1881 \nseconds.\nIt looks like it was mostly limited by the time of writing data to the disk.\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Wed, 25 Mar 2020 11:45:47 +0300", "msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Small computeRegionDelta optimization." } ]
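The word-at-a-time comparison that the thread above describes can be sketched as a standalone C function. This is an illustration only — not the actual `computeRegionDelta` patch — and the function name `find_changed_region` is invented here; it assumes, as PostgreSQL pages guarantee, that both images are word-aligned and that their size is a multiple of the native word size.

```c
#include <stddef.h>
#include <stdint.h>

/*
 * Find the changed region between two equal-sized page images by
 * comparing native-word-sized chunks instead of single bytes.  The
 * reported boundaries are widened to word granularity, which can make
 * the delta slightly larger but needs far fewer comparisons.
 *
 * Returns 1 and sets *start/*len (byte offsets) when a difference is
 * found, 0 when the images are identical.
 */
static int
find_changed_region(const void *a, const void *b, size_t size,
                    size_t *start, size_t *len)
{
    const uintptr_t *wa = (const uintptr_t *) a;
    const uintptr_t *wb = (const uintptr_t *) b;
    size_t      nwords = size / sizeof(uintptr_t);
    size_t      first;
    size_t      last;

    /* scan forward for the first differing word */
    for (first = 0; first < nwords; first++)
        if (wa[first] != wb[first])
            break;
    if (first == nwords)
        return 0;               /* images are identical */

    /* scan backward for the last differing word */
    for (last = nwords - 1; last > first; last--)
        if (wa[last] != wb[last])
            break;

    *start = first * sizeof(uintptr_t);
    *len = (last - first + 1) * sizeof(uintptr_t);
    return 1;
}
```

Reporting at word granularity can widen the region slightly beyond the true byte-level delta, which matches the trade-off weighed in the mail: with 24-byte record headers and 4/8-byte field alignment, the extra bytes are usually negligible compared with the comparisons saved.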
[ { "msg_contents": "Hi,\n\nI found that things could go wrong in some cases, when the unique index and the partition key use different opclasses.\n\nFor example:\n```\nCREATE TABLE ptop_test (a int, b int, c int) PARTITION BY LIST (a);\nCREATE TABLE ptop_test_p1 PARTITION OF ptop_test FOR VALUES IN ('1');\nCREATE TABLE ptop_test_m1 PARTITION OF ptop_test FOR VALUES IN ('-1');\nCREATE UNIQUE INDEX ptop_test_unq_abs_a ON ptop_test (a abs_int_btree_ops); -- this should fail\n```\nIn this example, `abs_int_btree_ops` is an opclass whose equality operator is customized to consider ‘-1’ and ‘1’ as equal.\nSo the unique index should not be allowed to be created, since ‘-1’ and ‘1’ will be put in different partitions.\n\nThe attached patch should fix this problem.\n\n\n\n\nBest Regards,\nGuancheng Luo", "msg_date": "Wed, 25 Mar 2020 18:43:43 +0800", "msg_from": "Guancheng Luo <prajnamort@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] Check operator when creating unique index on partition table" }, { "msg_contents": "Guancheng Luo <prajnamort@gmail.com> writes:\n> I found that things could go wrong in some cases, when the unique index and the partition key use different opclass.\n\nI agree that this is an oversight, but it seems like your solution is\novercomplicated and probably still too forgiving. 
Should we not just\ninsist that the index opfamily match the partition key opfamily?\nIt looks to me like that would reduce the code change to about like\nthis:\n\n- if (key->partattrs[i] == indexInfo->ii_IndexAttrNumbers[j])\n+ if (key->partattrs[i] == indexInfo->ii_IndexAttrNumbers[j] &&\n+ key->partopfamily[i] == get_opclass_family(classObjectId[j]))\n\nwhich is a lot more straightforward and provides a lot more certainty\nthat the index will act as the partition constraint demands.\n\nThis would reject, for example, a hash index associated with a btree-based\npartition constraint, but I'm not sure we're losing anything much thereby.\n(I do not think your patch is correct for the case where the opfamilies\nbelong to different AMs, anyway.)\n\nI'm not really on board with adding a whole new test script for this,\neither.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 25 Mar 2020 13:00:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Check operator when creating unique index on partition\n table" }, { "msg_contents": "> On Mar 26, 2020, at 01:00, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Guancheng Luo <prajnamort@gmail.com> writes:\n>> I found that things could go wrong in some cases, when the unique index and the partition key use different opclass.\n> \n> I agree that this is an oversight, but it seems like your solution is\n> overcomplicated and probably still too forgiving. 
Should we not just\n> insist that the index opfamily match the partition key opfamily?\n> It looks to me like that would reduce the code change to about like\n> this:\n> \n> - if (key->partattrs[i] == indexInfo->ii_IndexAttrNumbers[j])\n> + if (key->partattrs[i] == indexInfo->ii_IndexAttrNumbers[j] &&\n> + key->partopfamily[i] == get_opclass_family(classObjectId[j]))\n> \n> which is a lot more straightforward and provides a lot more certainty\n> that the index will act as the partition constraint demands.\n> \n> This would reject, for example, a hash index associated with a btree-based\n> partition constraint, but I'm not sure we're losing anything much thereby.\n> (I do not think your patch is correct for the case where the opfamilies\n> belong to different AMs, anyway.)\n\nSince a unique index cannot use HASH, I think we only need to consider BTREE indexes here.\n\nThere are cases where a BTREE index is associated with a HASH partition key, but I think we should allow them,\nas long as their equality operators consider the same values as equal.\nI’ve added some more tests for this case.\n\n> I'm not really on board with adding a whole new test script for this,\n> either.\n\nIndeed, I think `indexing.sql` might be more appropriate. 
I moved these tests in my new patch.\n\n\n\n\n\nBest Regards,\nGuancheng Luo", "msg_date": "Sat, 28 Mar 2020 11:28:41 +0800", "msg_from": "Guancheng Luo <prajnamort@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Check operator when creating unique index on partition\n table" }, { "msg_contents": "Guancheng Luo <prajnamort@gmail.com> writes:\n> On Mar 26, 2020, at 01:00, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> This would reject, for example, a hash index associated with a btree-based\n>> partition constraint, but I'm not sure we're losing anything much thereby.\n\n> There is cases when a BTREE index associated with a HASH partition key, but I think we should allow them,\n> as long as their equality operators consider the same value as equal.\n\nAh, yeah, I see we already have regression test cases that require that.\n\nI pushed the patch with some cosmetic revisions. I left out the\nregression test case though; it seemed pretty expensive considering\nthat the code is already being exercised by existing cases.\n\nThanks for the report and patch!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Apr 2020 14:53:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Check operator when creating unique index on partition\n table" } ]
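The hazard fixed in the thread above — a unique index whose operator class disagrees with the partition key's — can be illustrated with a tiny standalone model. All names here are invented for the illustration; this is not PostgreSQL code.

```c
#include <stdlib.h>

/*
 * Rows are routed to partitions using one notion of equality (the
 * exact value, as with LIST partitioning), while the "unique index"
 * uses a different one (absolute value, like the abs_int_btree_ops
 * opclass in the example above).  Two values the index considers
 * equal can then land in different partitions.
 */

/* LIST-style routing: each distinct value gets its own partition */
static int
partition_for(int a)
{
    return a;
}

/* equality operator of the hypothetical abs_int_btree_ops opclass */
static int
abs_equal(int x, int y)
{
    return abs(x) == abs(y);
}
```

Because `-1` and `1` are "equal" to the index but are routed to different partitions, the per-partition unique indexes can never see the duplicate — which is why the committed fix insists that the index opfamily agree with the partition key's opfamily.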
[ { "msg_contents": "Hello dear Hackers,\nI encountered a strange issue with generated column and inheritance.\nThe core of the problem is that when inheriting from a table, you can\nre-declare the same column (provided it has the same type).\n\nBut when doing this and introducing a generated column, an error is raised\n`[0A000] ERROR: cannot use column reference in DEFAULT expression`\n\nThis is too bad, as generated columns are an awesome new feature.\nI don't think that is an expected behavior, and at the very least the\nerror is misleading.\n\nHere is a super short synthetic demo, along with 2 workarounds\n```SQL\n---- testing an issue with inheritance and generated column\nDROP SCHEMA IF EXISTS test_inheritance_and_generated_column CASCADE ;\nCREATE SCHEMA IF NOT EXISTS test_inheritance_and_generated_column ;\n\n----------------------------------------------------------------------------------------------------\n--- ERROR (set a generated column in an inherited table)\nDROP TABLE IF EXISTS test_inheritance_and_generated_column.parent_table\nCASCADE;\nCREATE TABLE test_inheritance_and_generated_column.parent_table(\n science_grade float DEFAULT 0.7\n , avg_grade float\n);\n-- THIS RAISE THE ERROR : [0A000] ERROR: cannot use column reference in\nDEFAULT expression\nDROP TABLE IF EXISTS test_inheritance_and_generated_column.child_basic;\nCREATE TABLE test_inheritance_and_generated_column.child_basic(\n literature_grade float DEFAULT 0.3,\n -- avg_grade float is inherited\n avg_grade float GENERATED ALWAYS AS (\n(science_grade+literature_grade)/2.0 ) STORED\n)INHERITS (test_inheritance_and_generated_column.parent_table);\n------------------------------------------------------------------------------------------------------\n\n\n----------------------------------------------------------------------------------------------------\n--- WORKS (removing the column from parent)\nDROP TABLE IF EXISTS test_inheritance_and_generated_column.parent_table\nCASCADE;\nCREATE 
TABLE test_inheritance_and_generated_column.parent_table(\n science_grade float DEFAULT 0.7\n-- , avg_grade float\n);\n--\nDROP TABLE IF EXISTS test_inheritance_and_generated_column.child_basic;\nCREATE TABLE test_inheritance_and_generated_column.child_basic(\n literature_grade float DEFAULT 0.3,\n avg_grade float GENERATED ALWAYS AS (\n(science_grade+literature_grade)/2.0 ) STORED\n)INHERITS (test_inheritance_and_generated_column.parent_table);\n------------------------------------------------------------------------------------------------------\n\n\n\n----------------------------------------------------------------------------------------------------\n-- THIS WORKS (droping inheritance, dropping column, creating column with\ngenerating, adding inheritance )\nDROP TABLE IF EXISTS test_inheritance_and_generated_column.parent_table\nCASCADE;\nCREATE TABLE test_inheritance_and_generated_column.parent_table(\n science_grade float DEFAULT 0.7\n , avg_grade float\n);\n--\nDROP TABLE IF EXISTS test_inheritance_and_generated_column.child_basic;\nCREATE TABLE test_inheritance_and_generated_column.child_basic(\n literature_grade float DEFAULT 0.3,\n avg_grade float\n)INHERITS (test_inheritance_and_generated_column.parent_table);\nALTER TABLE test_inheritance_and_generated_column.child_basic NO INHERIT\ntest_inheritance_and_generated_column.parent_table;\nALTER TABLE test_inheritance_and_generated_column.child_basic DROP COLUMN\navg_grade;\nALTER TABLE test_inheritance_and_generated_column.child_basic\n ADD COLUMN avg_grade float GENERATED ALWAYS AS (\n(science_grade+literature_grade)/2.0 ) STORED;\nALTER TABLE test_inheritance_and_generated_column.child_basic INHERIT\ntest_inheritance_and_generated_column.parent_table;\n----------------------------------------------------------------------------------------------------\n```\nPostgreSQL 12.2 (Ubuntu 12.2-2.pgdg18.04+1) on x86_64-pc-linux-gnu,\ncompiled by gcc (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0, 
64-bit\n\nCheers,\nRemi-C", "msg_date": "Wed, 25 Mar 2020 12:00:21 +0100", "msg_from": "=?UTF-8?Q?R=C3=A9mi_Cura?= <remi.cura@gmail.com>", 
"msg_from_op": true, "msg_subject": "Generated column and inheritance: strange default error" } ]
[ { "msg_contents": "Hello,\n\nAfter writing an unreadable and stupidly long line for ldap \nauthentication in a \"pg_hba.conf\" file, I figured out that allowing \ncontinuations looked simple enough and should just be done.\n\nPatch attached.\n\n-- \nFabien.", "msg_date": "Wed, 25 Mar 2020 19:09:38 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "Allow continuations in \"pg_hba.conf\" files" }, { "msg_contents": "Hi,\n\nOn Wed, Mar 25, 2020 at 07:09:38PM +0100, Fabien COELHO wrote:\n> \n> Hello,\n> \n> After writing an unreadable and stupidly long line for ldap authentification\n> in a \"pg_hba.conf\" file, I figured out that allowing continuations looked\n> simple enough and should just be done.\n\nI tried this briefly.\n\n> - Records cannot be continued across lines.\n> + Records can be backslash-continued across lines.\n\nMaybe say: \"lines ending with a backslash are logically continued on the next\nline\", or similar.\n\n> +\t\t\t/* else we have a continuation, just blank it and loop */\n> +\t\t\tcontinuations++;\n> +\t\t\t*curend++ = ' ';\n\nSince it puts a blank there, it creates a \"word\" boundary, which I gather\nworked for your use case. 
But I wonder whether it's needed to add a space (or\notherwise, document that lines cannot be split between words?).\n\nYou might compare this behavior with that of makefiles (or find a better\nexample) which I happen to recall *don't* add a space; if you want that, you\nhave to end the line with: \" \\\" not just \"\\\".\n\nAnyway, I checked that the current patch handles users split across lines, like:\nalice,\\\nbob,\\\ncarol\n\nAs written, that depends on the parser's behavior of ignoring commas and\nblanks, since it sees:\n\"alice,[SPACE]bob,[SPACE]carol\"\n\nMaybe it'd be nice to avoid depending on that.\n\nI tried with a username called \"alice,bob\", split across lines:\n\n\"alice,\\\nbob\",\\\n\nBut then your patch makes it look for a user called \"alice, bob\" (with a\nspace). I realize that's not a compelling argument :)\n\nNote, that also appears to affect the \"username maps\" file. So mention in that\nchapter, too.\nhttps://www.postgresql.org/docs/current/auth-username-maps.html\n\nCheers,\n-- \nJustin", "msg_date": "Wed, 25 Mar 2020 14:39:23 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Allow continuations in \"pg_hba.conf\" files" }, { "msg_contents": "Hello Justin,\n\nthanks for the feedback.\n\n>> - Records cannot be continued across lines.\n>> + Records can be backslash-continued across lines.\n>\n> Maybe say: \"lines ending with a backslash are logically continued on the next\n> line\", or similar.\n\nI tried to change it along that.\n\n> Since it puts a blank there, it creates a \"word\" boundary, which I gather\n> worked for your use case. But I wonder whether it's needed to add a space (or\n> otherwise, document that lines cannot be split beween words?)\n\nHmmm. Ok, you are right. I hesitated while doing it. I removed the char \ninstead, so that it does not add a word break.\n\n> Note, that also appears to affect the \"username maps\" file. So mention \n> in that chapter, too. 
\n> https://www.postgresql.org/docs/current/auth-username-maps.html\n\nIndeed, the same tokenizer is used. I updated a sentence to point on \ncontinuations.\n\n-- \nFabien.", "msg_date": "Wed, 25 Mar 2020 21:45:40 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "Re: Allow continuations in \"pg_hba.conf\" files" }, { "msg_contents": "Hi Fabien,\r\nShould we consider the case \"\\ \", i.e. one or more spaces after the backslash?\r\nFor example, if I replace a user map \r\n\"mymap /^(.*)@mydomain\\.com$ \\1\" with \r\n\"mymap /^(.*)@mydomain\\.com$ \\ \"\r\n\"\\1\"\r\nby adding one extra space after the backslash, then I got the pg_role=\"\\\\\"\r\nbut I think what we expect is pg_role=\"\\\\1\"", "msg_date": "Thu, 02 Apr 2020 00:20:12 +0000", "msg_from": "David Zhang <david.zhang@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: Allow continuations in \"pg_hba.conf\" files" }, { "msg_contents": "At Thu, 02 Apr 2020 00:20:12 +0000, David Zhang <david.zhang@highgo.ca> wrote in \n> Hi Fabien,\n> Should we consider the case \"\\ \", i.e. one or more spaces after the backslash?\n> For example, if I replace a user map \n> \"mymap /^(.*)@mydomain\\.com$ \\1\" with \n> \"mymap /^(.*)@mydomain\\.com$ \\ \"\n> \"\\1\"\n> by adding one extra space after the backslash, then I got the pg_role=\"\\\\\"\n> but I think what we expect is pg_role=\"\\\\1\"\n\nFWIW, I don't think so. Generally a trailing backspace is an escape\ncharacter for the following newline. 
And '\\ ' is a escaped space,\nwhich is usualy menas a space itself.\n\nIn this case escape character doesn't work generally and I think it is\nnatural that a backslash in the middle of a line is a backslash\ncharacter itself.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 02 Apr 2020 13:25:43 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow continuations in \"pg_hba.conf\" files" }, { "msg_contents": "\nHello,\n\n> FWIW, I don't think so. Generally a trailing backspace is an escape\n> character for the following newline. And '\\ ' is a escaped space,\n> which is usualy menas a space itself.\n>\n> In this case escape character doesn't work generally and I think it is \n> natural that a backslash in the middle of a line is a backslash \n> character itself.\n\nI concur: The backslash char is only a continuation as the very last \ncharacter of the line, before cr/nl line ending markers.\n\nThere are no assumption about backslash escaping, quotes and such, which \nseems reasonable given the lexing structure of the files, i.e. records of \nspace-separated words, and # line comments.\n\n-- \nFabien.\n\n\n", "msg_date": "Thu, 2 Apr 2020 07:25:36 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "Re: Allow continuations in \"pg_hba.conf\" files" }, { "msg_contents": "On Thu, Apr 02, 2020 at 07:25:36AM +0200, Fabien COELHO wrote:\n> \n> Hello,\n> \n> > FWIW, I don't think so. Generally a trailing backspace is an escape\n> > character for the following newline. 
And '\\ ' is an escaped space,\nwhich usually means a space itself.\n\nIn this case the escape character doesn't work generally and I think it is\nnatural that a backslash in the middle of a line is a backslash\ncharacter itself.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 02 Apr 2020 13:25:43 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow continuations in \"pg_hba.conf\" files" }, { "msg_contents": "\nHello,\n\n> FWIW, I don't think so. Generally a trailing backspace is an escape\n> character for the following newline. And '\\ ' is a escaped space,\n> which is usualy menas a space itself.\n>\n> In this case escape character doesn't work generally and I think it is \n> natural that a backslash in the middle of a line is a backslash \n> character itself.\n\nI concur: The backslash char is only a continuation as the very last \ncharacter of the line, before cr/nl line ending markers.\n\nThere are no assumptions about backslash escaping, quotes and such, which \nseems reasonable given the lexing structure of the files, i.e. records of \nspace-separated words, and # line comments.\n\n-- \nFabien.", "msg_date": "Thu, 2 Apr 2020 07:25:36 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "Re: Allow continuations in \"pg_hba.conf\" files" }, { "msg_contents": "On Thu, Apr 02, 2020 at 07:25:36AM +0200, Fabien COELHO wrote:\n> \n> Hello,\n> \n> > FWIW, I don't think so. Generally a trailing backspace is an escape\n> > character for the following newline. 
if the \ncontinuation is within quotes, then the quoted stuff is implicitely \ncontinuated, there is no different rule because it is within quotes.\n\n-- \nFabien.\n\n\n", "msg_date": "Thu, 2 Apr 2020 08:46:08 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "Re: Allow continuations in \"pg_hba.conf\" files" }, { "msg_contents": "On 2020-04-01 10:25 p.m., Fabien COELHO wrote:\n\n>\n> Hello,\n>\n>> FWIW, I don't think so. Generally a trailing backspace is an escape\n>> character for the following newline.  And '\\ ' is a escaped space,\n>> which is usualy menas a space itself.\n>>\n>> In this case escape character doesn't work generally and I think it \n>> is natural that a backslash in the middle of a line is a backslash \n>> character itself.\n>\n> I concur: The backslash char is only a continuation as the very last \n> character of the line, before cr/nl line ending markers.\n\n+Agree. However, it would nice to update the sentence below if I \nunderstand it correctly.\n\n\"+   Comments, whitespace and continuations are handled in the same way \nas in\" pg_hba.conf\n\nFor example, if a user provide a configuration like below (even such a \ncomments is not recommended)\n\n\"host    all     all     127.0.0.1/32    trust  #COMMENTS, it works\"\n\ni.e. the original pg_hba.conf allows to have comments in each line, but \nwith new continuation introduced, the comments has to be put to the last \nline.\n\n>\n> There are no assumption about backslash escaping, quotes and such, \n> which seems reasonable given the lexing structure of the files, i.e. \n> records of space-separated words, and # line comments.\n>\n-- \nDavid\n\nSoftware Engineer\nHighgo Software Inc. 
(Canada)\nwww.highgo.ca\n\n\n\n", "msg_date": "Thu, 2 Apr 2020 11:05:17 -0700", "msg_from": "David Zhang <david.zhang@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: Allow continuations in \"pg_hba.conf\" files" }, { "msg_contents": "Hello David,\n\n> +Agree. However, it would nice to update the sentence below if I understand \n> it correctly.\n>\n> \"+   Comments, whitespace and continuations are handled in the same way as \n> in\" pg_hba.conf\n\nIn the attached v3, I've tried to clarify comments and doc about \ntokenization rules relating to comments, strings and continuations.\n\n-- \nFabien.", "msg_date": "Fri, 3 Apr 2020 07:46:32 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "Re: Allow continuations in \"pg_hba.conf\" files" }, { "msg_contents": "> In the attached v3, I've tried to clarify comments and doc about tokenization \n> rules relating to comments, strings and continuations.\n\nAttached v4 improves comments & doc as suggested by Justin.\n\n-- \nFabien.", "msg_date": "Mon, 6 Jul 2020 17:38:14 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "Re: Allow continuations in \"pg_hba.conf\" files" }, { "msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n> [ pg-hba-cont-4.patch ]\n\nI looked this over and I think it seems reasonable, but there's\nsomething else we should do. If people are writing lines long\nenough that they need to continue them, how long will it be\nbefore they overrun the line length limit? Admittedly, there's\na good deal of daylight between 80 characters and 8K, but if\nwe're busy removing restrictions on password length in an adjacent\nthread [1], I think we ought to get rid of pg_hba.conf's line length\nrestriction while we're at it.\n\nAccordingly, I borrowed some code from that thread and present\nthe attached revision. 
I also added some test coverage, since\nthat was lacking before, and wordsmithed docs and comments slightly.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/09512C4F-8CB9-4021-B455-EF4C4F0D55A0%40amazon.com", "msg_date": "Wed, 02 Sep 2020 14:53:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allow continuations in \"pg_hba.conf\" files" }, { "msg_contents": "I wrote:\n> Accordingly, I borrowed some code from that thread and present\n> the attached revision. I also added some test coverage, since\n> that was lacking before, and wordsmithed docs and comments slightly.\n\nHearing no comments, pushed that way.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 03 Sep 2020 12:17:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allow continuations in \"pg_hba.conf\" files" }, { "msg_contents": "\nHello Tom,\n\n>> Accordingly, I borrowed some code from that thread and present\n>> the attached revision. I also added some test coverage, since\n>> that was lacking before, and wordsmithed docs and comments slightly.\n>\n> Hearing no comments, pushed that way.\n\nThanks for the fixes and improvements!\n\nI notice that buf.data is not freed. I guess that the server memory \nmanagement will recover it.\n\n-- \nFabien.\n\n\n", "msg_date": "Fri, 4 Sep 2020 16:05:02 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": true, "msg_subject": "Re: Allow continuations in \"pg_hba.conf\" files" }, { "msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n> I notice that buf.data is not freed. I guess that the server memory \n> management will recover it.\n\nYeah, it's in the transient context holding all of the results of\nreading the file. I considered pfree'ing it at the end of the\nfunction, but I concluded there's no point. 
The space will be\nrecycled when the context is destroyed, and since we're not (IIRC)\ngoing to allocate anything more in that context, nothing would be\ngained by freeing it earlier --- it'd just stay as unused memory\nwithin the context.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 04 Sep 2020 11:00:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Allow continuations in \"pg_hba.conf\" files" } ]
[ { "msg_contents": "Provide a TLS init hook\n\nThe default hook function sets the default password callback function.\nIn order to allow preloaded libraries to have an opportunity to override\nthe default, TLS initialization is now delayed slightly until after\nshared preloaded libraries have been loaded.\n\nA test module is provided which contains a trivial example that decodes\nan obfuscated password for an SSL certificate.\n\nAuthor: Andrew Dunstan\nReviewed By: Andreas Karlsson, Asaba Takanori\nDiscussion: https://postgr.es/m/04116472-818b-5859-1d74-3d995aab2252@2ndQuadrant.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/896fcdb230e729652d37270c8606ccdc45212f0d\n\nModified Files\n--------------\nsrc/backend/libpq/be-secure-openssl.c | 48 +++++++-----\nsrc/backend/postmaster/postmaster.c | 22 +++---\nsrc/include/libpq/libpq-be.h | 4 +\nsrc/test/modules/Makefile | 5 ++\n.../modules/ssl_passphrase_callback/.gitignore | 1 +\nsrc/test/modules/ssl_passphrase_callback/Makefile | 24 ++++++\n.../modules/ssl_passphrase_callback/server.crt | 19 +++++\n.../modules/ssl_passphrase_callback/server.key | 30 ++++++++\n.../ssl_passphrase_callback/ssl_passphrase_func.c | 88 ++++++++++++++++++++++\n.../ssl_passphrase_callback/t/001_testfunc.pl | 80 ++++++++++++++++++++\nsrc/tools/msvc/Mkvcbuild.pm | 2 +-\n11 files changed, 292 insertions(+), 31 deletions(-)", "msg_date": "Wed, 25 Mar 2020 21:32:18 +0000", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "pgsql: Provide a TLS init hook" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> Provide a TLS init hook\n\nBuildfarm's not terribly happy --- I suspect that the makefile for\nthe new test module is failing to link in libopenssl explicitly.\nSome platforms are more forgiving of that than others.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 25 Mar 2020 18:01:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", 
"msg_from_op": false, "msg_subject": "Re: pgsql: Provide a TLS init hook" }, { "msg_contents": "I wrote:\n> Buildfarm's not terribly happy --- I suspect that the makefile for\n> the new test module is failing to link in libopenssl explicitly.\n\nConcretely, I see that contrib/sslinfo has\n\nSHLIB_LINK += $(filter -lssl -lcrypto -lssleay32 -leay32, $(LIBS))\n\nwhich you probably need to crib here. There might be some analogous\nmagic in the MSVC files, too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 25 Mar 2020 18:17:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Provide a TLS init hook" }, { "msg_contents": "I wrote:\n> Concretely, I see that contrib/sslinfo has\n> SHLIB_LINK += $(filter -lssl -lcrypto -lssleay32 -leay32, $(LIBS))\n\nI verified that that fixes things on macOS and pushed it, along with\na couple other minor fixes.\n\nHowever, I'm quite desperately unhappy that the new test module\ndoes this:\n\n$node->append_conf('postgresql.conf', \"listen_addresses = 'localhost'\");\n\nThat's opening a security hole. Note that we do *not* run src/test/ssl\nby default, and it has a README warning people not to run it on multiuser\nsystems. It seems 100% unacceptable for this test to fire up a similarly\ninsecure server without so much as a by-your-leave.\n\nI don't actually see why we need the localhost port at all --- it doesn't\nlook like this test ever attempts to connect to the server. 
So couldn't\nwe just drop that?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 25 Mar 2020 19:44:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Provide a TLS init hook" }, { "msg_contents": "\nOn 3/25/20 7:44 PM, Tom Lane wrote:\n> I wrote:\n>> Concretely, I see that contrib/sslinfo has\n>> SHLIB_LINK += $(filter -lssl -lcrypto -lssleay32 -leay32, $(LIBS))\n> I verified that that fixes things on macOS and pushed it, along with\n> a couple other minor fixes.\n\n\nThanks.\n\n\n>\n> However, I'm quite desperately unhappy that the new test module\n> does this:\n>\n> $node->append_conf('postgresql.conf', \"listen_addresses = 'localhost'\");\n>\n> That's opening a security hole. Note that we do *not* run src/test/ssl\n> by default, and it has a README warning people not to run it on multiuser\n> systems. It seems 100% unacceptable for this test to fire up a similarly\n> insecure server without so much as a by-your-leave.\n>\n> I don't actually see why we need the localhost port at all --- it doesn't\n> look like this test ever attempts to connect to the server. So couldn't\n> we just drop that?\n>\n> \t\t\t\n\n\n\nSeems reasonable. I just tested that and it seems quite happy, so I'll\nmake the change.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Wed, 25 Mar 2020 21:11:09 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Provide a TLS init hook" }, { "msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 3/25/20 7:44 PM, Tom Lane wrote:\n>> I don't actually see why we need the localhost port at all --- it doesn't\n>> look like this test ever attempts to connect to the server. So couldn't\n>> we just drop that?\n\n> Seems reasonable. 
I just tested that and it seems quite happy, so I'll\n> make the change.\n\nCool, thanks.\n\njacana has just exposed a different problem: it's not configured\n--with-openssl, but the buildfarm script is trying to run this\nnew test module anyway. I'm confused about the reason.\n\"make installcheck\" in src/test/modules does the right thing,\nbut seemingly that client is doing something different?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 25 Mar 2020 21:28:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Provide a TLS init hook" }, { "msg_contents": "\nOn 3/25/20 9:28 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>> On 3/25/20 7:44 PM, Tom Lane wrote:\n>>> I don't actually see why we need the localhost port at all --- it doesn't\n>>> look like this test ever attempts to connect to the server. So couldn't\n>>> we just drop that?\n>> Seems reasonable. I just tested that and it seems quite happy, so I'll\n>> make the change.\n> Cool, thanks.\n>\n> jacana has just exposed a different problem: it's not configured\n> --with-openssl, but the buildfarm script is trying to run this\n> new test module anyway. I'm confused about the reason.\n> \"make installcheck\" in src/test/modules does the right thing,\n> but seemingly that client is doing something different?\n>\n> \t\t\t\n\n\n\nUgh. I have put in place a hack to clear the error on jacana. 
Yes, the\nclient does something different so we can run each module separately.\nTrawling through the output and files for one test on its own is hard\nenough, I don't want to aggregate them.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Thu, 26 Mar 2020 07:01:39 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Provide a TLS init hook" }, { "msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 3/25/20 9:28 PM, Tom Lane wrote:\n>> jacana has just exposed a different problem: it's not configured\n>> --with-openssl, but the buildfarm script is trying to run this\n>> new test module anyway. I'm confused about the reason.\n>> \"make installcheck\" in src/test/modules does the right thing,\n>> but seemingly that client is doing something different?\n\n> Ugh. I have put in place a hack to clear the error on jacana. 
Yes, the\n> client does something different so we can run each module separately.\n> Trawling through the output and files for one test on its own is hard\n> enough, I don't want to aggregate them.\n\nWell, I'm confused, because my own critters are running this as part\nof a single make-installcheck-in-src/test/modules step, eg\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=longfin&dt=2020-03-26%2002%3A09%3A08&stg=testmodules-install-check-C\n\nWhy is jacana doing it differently?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 26 Mar 2020 09:50:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Provide a TLS init hook" }, { "msg_contents": "\nOn 3/26/20 9:50 AM, Tom Lane wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>> On 3/25/20 9:28 PM, Tom Lane wrote:\n>>> jacana has just exposed a different problem: it's not configured\n>>> --with-openssl, but the buildfarm script is trying to run this\n>>> new test module anyway. I'm confused about the reason.\n>>> \"make installcheck\" in src/test/modules does the right thing,\n>>> but seemingly that client is doing something different?\n>> Ugh. I have put in place a hack to clear the error on jacana. 
Yes, the\n>> client does something different so we can run each module separately.\n>> Trawling through the output and files for one test on its own is hard\n>> enough, I don't want to aggregate them.\n> Well, I'm confused, because my own critters are running this as part\n> of a single make-installcheck-in-src/test/modules step, eg\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=longfin&dt=2020-03-26%2002%3A09%3A08&stg=testmodules-install-check-C\n>\n> Why is jacana doing it differently?\n\n\n\nlongfin is also running it (first) here\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=longfin&dt=2020-03-26%2014%3A39%3A51&stg=ssl_passphrase_callback-check\n\n\nThat's where jacana failed.\n\n\nI don't think this belongs in installcheck, we should add\n'NO_INSTALLCHECK = 1' to the Makefile.\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Thu, 26 Mar 2020 11:11:08 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Provide a TLS init hook" }, { "msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 3/26/20 9:50 AM, Tom Lane wrote:\n>> Why is jacana doing it differently?\n\n> longfin is also running it (first) here\n> https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=longfin&dt=2020-03-26%2014%3A39%3A51&stg=ssl_passphrase_callback-check\n\nOh, I missed that. Isn't that pretty duplicative of the\ntestmodules-install phase?\n\n> I don't think this belongs in installcheck, we should add\n> 'NO_INSTALLCHECK = 1' to the Makefile.\n\nWhy? 
The other src/test/modules/ modules with TAP tests do not\nspecify that, with the exception of commit_ts which has a solid\ndoesnt-work-in-the-default-configuration excuse.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 26 Mar 2020 11:31:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Provide a TLS init hook" }, { "msg_contents": "\nOn 3/26/20 11:31 AM, Tom Lane wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>> On 3/26/20 9:50 AM, Tom Lane wrote:\n>>> Why is jacana doing it differently?\n>> longfin is also running it (first) here\n>> https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=longfin&dt=2020-03-26%2014%3A39%3A51&stg=ssl_passphrase_callback-check\n> Oh, I missed that. Isn't that pretty duplicative of the\n> testmodules-install phase?\n\n\nYes, but see below\n\n\n>\n>> I don't think this belongs in installcheck, we should add\n>> 'NO_INSTALLCHECK = 1' to the Makefile.\n> Why? The other src/test/modules/ modules with TAP tests do not\n> specify that, with the exception of commit_ts which has a solid\n> doesnt-work-in-the-default-configuration excuse.\n>\n> \t\t\t\n\n\n\nThat seems wrong, installcheck should be testing against an installed\ninstance, and the TAP tests don't. Moreover, from the buildfarm's POV\nit's completely wrong, as we call the installcheck targets multiple\ntimes, once for each configured locale. See one of the animals that\ntests multiple locales (e.g. 
crake or prion)\n\n\nsrc/test is a mess, TBH, and I have spent quite some time trying to get\nit so that we test everything but without duplication, clearly without\ncomplete success.\n\n\ncheers\n\n\nandrew\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Thu, 26 Mar 2020 16:07:42 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Provide a TLS init hook" }, { "msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 3/26/20 11:31 AM, Tom Lane wrote:\n>> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>>> I don't think this belongs in installcheck, we should add\n>>> 'NO_INSTALLCHECK = 1' to the Makefile.\n\n>> Why? The other src/test/modules/ modules with TAP tests do not\n>> specify that, with the exception of commit_ts which has a solid\n>> doesnt-work-in-the-default-configuration excuse.\n\n> That seems wrong, installcheck should be testing against an installed\n> instance, and the TAP tests don't.\n\nSo? We clearly document that for the TAP tests, \"make installcheck\"\nmeans \"use the installed executables, but run a new instance\" [1].\n\n> Moreover, from the buildfarm's POV\n> it's completely wrong, as we call the installcheck targets multiple\n> times, once for each configured locale. See one of the animals that\n> tests multiple locales (e.g. crake or prion)\n\nYeah. That's productive if you think the tests might be\nlocale-sensitive. 
I doubt that any of the ones under src/test/modules/\nactually are at the moment, so maybe this is a waste of buildfarm effort.\nBut I don't think that it's the place of the Makefiles to dictate such\npolicy, and especially not for them to do so by breaking the ability to\nuse \"make installcheck\" at all.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/docs/devel/regress-tap.html\n\n\n", "msg_date": "Thu, 26 Mar 2020 16:31:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Provide a TLS init hook" }, { "msg_contents": "\n[discussion from -committers]\n\n\nOn 3/26/20 4:31 PM, Tom Lane wrote:\n> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>> On 3/26/20 11:31 AM, Tom Lane wrote:\n>>> Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n>>>> I don't think this belongs in installcheck, we should add\n>>>> 'NO_INSTALLCHECK = 1' to the Makefile.\n>>> Why? The other src/test/modules/ modules with TAP tests do not\n>>> specify that, with the exception of commit_ts which has a solid\n>>> doesnt-work-in-the-default-configuration excuse.\n>> That seems wrong, installcheck should be testing against an installed\n>> instance, and the TAP tests don't.\n> So? We clearly document that for the TAP tests, \"make installcheck\"\n> means \"use the installed executables, but run a new instance\" [1].\n\n\n\nI think we were probably a bit shortsighted about that. But what's done\nis done. I wonder if there is a simple way we could turn it off for the\nbuildfarm?\n\n\n\n>> Moreover, from the buildfarm's POV\n>> it's completely wrong, as we call the installcheck targets multiple\n>> times, once for each configured locale. See one of the animals that\n>> tests multiple locales (e.g. crake or prion)\n> Yeah. That's productive if you think the tests might be\n> locale-sensitive. 
I doubt that any of the ones under src/test/modules/\n> actually are at the moment, so maybe this is a waste of buildfarm effort.\n> But I don't think that it's the place of the Makefiles to dictate such\n> policy, and especially not for them to do so by breaking the ability to\n> use \"make installcheck\" at all.\n>\n> \t\t\tregards, tom lane\n>\n> [1] https://www.postgresql.org/docs/devel/regress-tap.html\n\n\nRight now the explicit TAP test code in the buildfarm knows how to collect all the relevant output. The installcheck code doesn't know about that for TAP tests. \n\nI get that developers want to be able to run tests in a small number of commands, but for the buildfarm I generally favor more disaggregated tests. That way if test X fails it's much easier to focus on the problem. (related note: I'm working on breaking up the log text blobs which will also help focussing on the right area).\n\nMaybe we need to take the discussion to -hackers.\n\n\ncheers\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Fri, 27 Mar 2020 06:53:11 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Provide a TLS init hook" }, { "msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> [discussion from -committers]\n\n> On 3/26/20 4:31 PM, Tom Lane wrote:\n>> So? We clearly document that for the TAP tests, \"make installcheck\"\n>> means \"use the installed executables, but run a new instance\" [1].\n\n> I think we were probably a bit shortsighted about that. But what's done\n> is done. I wonder if there is a simple way we could turn it off for the\n> buildfarm?\n\nI think it was entirely intentional. I use \"installcheck\" all the time\nto save the cost of repeatedly building an install tree. 
If anything,\nit's more important to be able to do that when running a specific\nsubdirectory's tests than when testing the whole tree, because the\noverhead would be worse in proportion. So sprinkling NO_INSTALLCHECK\nliberally would make me sad. (In fact, I wonder if we should treat\nthat as only disabling traditional-framework tests not TAP tests.\nThe problem of tests requiring atypical configurations doesn't apply\nto TAP tests.)\n\n> Right now the explicit TAP test code in the buildfarm knows how to collect all the relevant output. The installcheck code doesn't know about that for TAP tests. \n\nIt seems like what the buildfarm would like is a way to invoke TAP tests\nand traditional-framework tests separately, so that it could apply special\ntooling to the former. I'd have no objection to making that possible.\n\nAlternatively, maybe you could just dig through the tree after-the-fact\nlooking for tmp_check subdirectories, and capturing their contents?\n\nA separate issue is whether or not it's worth running all the\nsrc/test/modules/ tests over again for multiple locales. ISTM the\nanswer is mostly \"no\", but I grant that some of them might be locale\nsensitive. Maybe we could mark the ones that are in their Makefiles?\nGet the buildfarm to look for \"LOCALE_SENSITIVE = 1\" or the like.\nRight now, the modules/ tests don't run long enough for this to be super\nimportant, but we might be more worried about their cost in future.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 27 Mar 2020 11:09:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Provide a TLS init hook" }, { "msg_contents": "\nOn 3/27/20 11:09 AM, Tom Lane wrote:\n>\n>> Right now the explicit TAP test code in the buildfarm knows how to collect all the relevant output. The installcheck code doesn't know about that for TAP tests. 
\n> It seems like what the buildfarm would like is a way to invoke TAP tests\n> and traditional-framework tests separately, so that it could apply special\n> tooling to the former. I'd have no objection to making that possible.\n>\n\nExactly. I'll look into that, but I'm open to any ideas people have.\n\n\ncheers\n\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Fri, 27 Mar 2020 15:35:15 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Provide a TLS init hook" }, { "msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On 3/27/20 11:09 AM, Tom Lane wrote:\n>> It seems like what the buildfarm would like is a way to invoke TAP tests\n>> and traditional-framework tests separately, so that it could apply special\n>> tooling to the former. I'd have no objection to making that possible.\n\n> Exactly. I'll look into that, but I'm open to any ideas people have.\n\nWith the makefile infrastructure, the first thing that comes to mind\nis to support something like\n\n\tmake [install]check SKIP_TAP_TESTS=1\n\n\tmake [install]check SKIP_TRADITIONAL_TESTS=1\n\nDon't know what to do in the MSVC world.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 27 Mar 2020 16:41:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Provide a TLS init hook" } ]
[ { "msg_contents": "I noticed in passing that backend/storage/page/README hadn't gotten the memo\nabout pg_checksums. The attached tiny diff adds a small mention of it.\n\ncheers ./daniel", "msg_date": "Thu, 26 Mar 2020 15:02:42 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "pg_checksums in backend/storage/page/README" }, { "msg_contents": "On Thu, Mar 26, 2020 at 3:02 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> I noticed in passing that backend/storage/page/README hadn't gotten the memo\n> about pg_checksums. The attached tiny diff adds a small mention of it.\n\nThat seems obvious enough. Pushed, thanks!\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Thu, 26 Mar 2020 15:06:34 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: pg_checksums in backend/storage/page/README" }, { "msg_contents": "On Thu, Mar 26, 2020 at 03:02:42PM +0100, Daniel Gustafsson wrote:\n> I noticed in passing that backend/storage/page/README hadn't gotten the memo\n> about pg_checksums. The attached tiny diff adds a small mention of it.\n\nGood catch! LGTM\n\n\n", "msg_date": "Thu, 26 Mar 2020 15:12:02 +0100", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_checksums in backend/storage/page/README" }, { "msg_contents": "On Thu, Mar 26, 2020 at 03:06:34PM +0100, Magnus Hagander wrote:\n> That seems obvious enough. Pushed, thanks!\n\nThanks for the fix.\n--\nMichael", "msg_date": "Fri, 27 Mar 2020 16:39:55 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_checksums in backend/storage/page/README" } ]
[ { "msg_contents": "While reviewing the patch for \\gf, I noticed that \\gx does not have tab\ncompletion for its optional filename. Trivial patch attached. I would\nalso suggest this be backpatched.\n-- \nVik Fearing", "msg_date": "Thu, 26 Mar 2020 09:58:50 -0700", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Tab completion for \\gx" }, { "msg_contents": "\nPatch applied through PG 10, thanks.\n\n---------------------------------------------------------------------------\n\nOn Thu, Mar 26, 2020 at 09:58:50AM -0700, Vik Fearing wrote:\n> While reviewing the patch for \\gf, I noticed that \\gx does not have tab\n> completion for its optional filename. Trivial patch attached. I would\n> also suggest this be backpatched.\n> -- \n> Vik Fearing\n\n> diff --git a/src/bin/psql/tab-complete.c b/src/bin/psql/tab-complete.c\n> index ae35fa4aa9..7252b6c4e6 100644\n> --- a/src/bin/psql/tab-complete.c\n> +++ b/src/bin/psql/tab-complete.c\n> @@ -3882,7 +3882,7 @@ psql_completion(const char *text, int start, int end)\n> \t\tCOMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_routines, NULL);\n> \telse if (TailMatchesCS(\"\\\\sv*\"))\n> \t\tCOMPLETE_WITH_SCHEMA_QUERY(Query_for_list_of_views, NULL);\n> -\telse if (TailMatchesCS(\"\\\\cd|\\\\e|\\\\edit|\\\\g|\\\\i|\\\\include|\"\n> +\telse if (TailMatchesCS(\"\\\\cd|\\\\e|\\\\edit|\\\\g|\\\\gx|\\\\i|\\\\include|\"\n> \t\t\t\t\t\t \"\\\\ir|\\\\include_relative|\\\\o|\\\\out|\"\n> \t\t\t\t\t\t \"\\\\s|\\\\w|\\\\write|\\\\lo_import\"))\n> \t{\n\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n", "msg_date": "Tue, 31 Mar 2020 23:01:44 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Tab completion for \\gx" }, { "msg_contents": "On 4/1/20 5:01 AM, Bruce Momjian wrote:\n> \n> Patch applied though PG 10, thanks.\n\nThanks!\n-- \nVik Fearing\n\n\n", "msg_date": "Wed, 1 Apr 2020 09:11:51 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": true, "msg_subject": "Re: Tab completion for \\gx" } ]
[ { "msg_contents": "Twice in the past month [1][2], buildfarm member hoverfly has managed\nto reach the \"unreachable\" Assert(false) at the end of\nSyncRepGetSyncStandbysPriority.\n\nWhat seems likely to me, after quickly eyeballing the code, is that\nhoverfly is hitting the blatantly-obvious race condition in that function.\nNamely, that the second loop supposes that the state of the walsender\narray hasn't changed since the first loop.\n\nThe minimum fix for this, I suppose, would have the first loop capture\nthe sync_standby_priority value for each walsender along with what it's\nalready capturing. But I wonder if the whole function shouldn't be\nrewritten from scratch, because it seems like the algorithm is both\nexpensively brute-force and unintelligible, which is a sad combination.\nIt's likely that the number of walsenders would never be high enough\nthat efficiency could matter, but then couldn't we use an algorithm\nthat is less complicated and more obviously correct? (Because the\nalternative conclusion, if you reject the theory that a race is happening,\nis that the algorithm is just flat out buggy; something that's not too\neasy to disprove either.)\n\nAnother fairly dubious thing here is that whether or not *am_sync\ngets set depends not only on whether MyWalSnd is claiming to be\nsynchronous but on how many lower-numbered walsenders are too.\nIs that really the right thing?\n\nBut worse than any of that is that the return value seems to be\na list of walsender array indexes, meaning that the callers cannot\nuse it without making even stronger assumptions about the array\ncontents not having changed since the start of this function.\n\nIt sort of looks like the design is based on the assumption that\nthe array contents can't change while SyncRepLock is held ... 
but\nif that's the plan then why bother with the per-walsender spinlocks?\nIn any case this assumption seems to be failing, suggesting either\nthat there's a caller that's not holding SyncRepLock when it calls\nthis function, or that somebody is failing to take that lock while\nmodifying the array.\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hoverfly&dt=2020-02-29%2001%3A34%3A55\n[2] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hoverfly&dt=2020-03-26%2013%3A51%3A15\n\n\n", "msg_date": "Thu, 26 Mar 2020 21:26:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Race condition in SyncRepGetSyncStandbysPriority" }, { "msg_contents": "\n\nOn 2020/03/27 10:26, Tom Lane wrote:\n> Twice in the past month [1][2], buildfarm member hoverfly has managed\n> to reach the \"unreachable\" Assert(false) at the end of\n> SyncRepGetSyncStandbysPriority.\n\nWhen I search the past discussions, I found that Noah Misch reported\nthe same issue.\nhttps://www.postgresql.org/message-id/20200206074552.GB3326097@rfd.leadboat.com\n\n> What seems likely to me, after quickly eyeballing the code, is that\n> hoverfly is hitting the blatantly-obvious race condition in that function.\n> Namely, that the second loop supposes that the state of the walsender\n> array hasn't changed since the first loop.\n> \n> The minimum fix for this, I suppose, would have the first loop capture\n> the sync_standby_priority value for each walsender along with what it's\n> already capturing. 
But I wonder if the whole function shouldn't be\n> rewritten from scratch, because it seems like the algorithm is both\n> expensively brute-force and unintelligible, which is a sad combination.\n> It's likely that the number of walsenders would never be high enough\n> that efficiency could matter, but then couldn't we use an algorithm\n> that is less complicated and more obviously correct?\n\n+1 to rewrite the function with better algorithm.\n\n> (Because the\n> alternative conclusion, if you reject the theory that a race is happening,\n> is that the algorithm is just flat out buggy; something that's not too\n> easy to disprove either.)\n> \n> Another fairly dubious thing here is that whether or not *am_sync\n> gets set depends not only on whether MyWalSnd is claiming to be\n> synchronous but on how many lower-numbered walsenders are too.\n> Is that really the right thing?\n> \n> But worse than any of that is that the return value seems to be\n> a list of walsender array indexes, meaning that the callers cannot\n> use it without making even stronger assumptions about the array\n> contents not having changed since the start of this function.\n> \n> It sort of looks like the design is based on the assumption that\n> the array contents can't change while SyncRepLock is held ... but\n> if that's the plan then why bother with the per-walsender spinlocks?\n> In any case this assumption seems to be failing, suggesting either\n> that there's a caller that's not holding SyncRepLock when it calls\n> this function, or that somebody is failing to take that lock while\n> modifying the array.\n\nAs far as I read the code, that assumption seems still valid. But the problem\nis that each walsender updates MyWalSnd->sync_standby_priority at each\nconvenient timing, when SIGHUP is signaled. 
That is, at a certain moment,\nsome walsenders (also their WalSnd entries in shmem) work based on\nthe latest configuration but the others (also their WalSnd entries) work based\non the old one.\n\n\tlowest_priority = SyncRepConfig->nmembers;\n\tnext_highest_priority = lowest_priority + 1;\n\nSyncRepGetSyncStandbysPriority() calculates the lowest priority among\nall running walsenders as the above, by using the configuration info that\nthis walsender is based on. But this calculated lowest priority would be\ninvalid if other walsender is based on different (e.g., old) configuraiton.\nThis can cause the (other) walsender to have lower priority than\nthe calculated lowest priority and the second loop in\nSyncRepGetSyncStandbysPriority() to unexpectedly end.\n\nTherefore, the band-aid fix seems to be to set the lowest priority to\nvery large number at the beginning of SyncRepGetSyncStandbysPriority().\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Fri, 27 Mar 2020 13:54:25 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in SyncRepGetSyncStandbysPriority" }, { "msg_contents": "At Fri, 27 Mar 2020 13:54:25 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> \n> \n> On 2020/03/27 10:26, Tom Lane wrote:\n> > Twice in the past month [1][2], buildfarm member hoverfly has managed\n> > to reach the \"unreachable\" Assert(false) at the end of\n> > SyncRepGetSyncStandbysPriority.\n> \n> When I search the past discussions, I found that Noah Misch reported\n> the same issue.\n> https://www.postgresql.org/message-id/20200206074552.GB3326097@rfd.leadboat.com\n> \n> > What seems likely to me, after quickly eyeballing the code, is that\n> > hoverfly is hitting the blatantly-obvious race condition in that\n> > function.\n> > Namely, that the second loop supposes that the state of the walsender\n> > 
array hasn't changed since the first loop.\n> > The minimum fix for this, I suppose, would have the first loop capture\n> > the sync_standby_priority value for each walsender along with what\n> > it's\n> > already capturing. But I wonder if the whole function shouldn't be\n> > rewritten from scratch, because it seems like the algorithm is both\n> > expensively brute-force and unintelligible, which is a sad\n> > combination.\n> > It's likely that the number of walsenders would never be high enough\n> > that efficiency could matter, but then couldn't we use an algorithm\n> > that is less complicated and more obviously correct?\n> \n> +1 to rewrite the function with better algorithm.\n> \n> > (Because the\n> > alternative conclusion, if you reject the theory that a race is\n> > happening,\n> > is that the algorithm is just flat out buggy; something that's not too\n> > easy to disprove either.)\n> > Another fairly dubious thing here is that whether or not *am_sync\n> > gets set depends not only on whether MyWalSnd is claiming to be\n> > synchronous but on how many lower-numbered walsenders are too.\n> > Is that really the right thing?\n> > But worse than any of that is that the return value seems to be\n> > a list of walsender array indexes, meaning that the callers cannot\n> > use it without making even stronger assumptions about the array\n> > contents not having changed since the start of this function.\n> > It sort of looks like the design is based on the assumption that\n> > the array contents can't change while SyncRepLock is held ... but\n> > if that's the plan then why bother with the per-walsender spinlocks?\n> > In any case this assumption seems to be failing, suggesting either\n> > that there's a caller that's not holding SyncRepLock when it calls\n> > this function, or that somebody is failing to take that lock while\n> > modifying the array.\n> \n> As far as I read the code, that assumption seems still valid. 
But the\n> problem\n> is that each walsender updates MyWalSnd->sync_standby_priority at each\n> convenient timing, when SIGHUP is signaled. That is, at a certain\n> moment,\n> some walsenders (also their WalSnd entries in shmem) work based on\n> the latest configuration but the others (also their WalSnd entries)\n> work based\n> on the old one.\n> \n> \tlowest_priority = SyncRepConfig->nmembers;\n> \tnext_highest_priority = lowest_priority + 1;\n> \n> SyncRepGetSyncStandbysPriority() calculates the lowest priority among\n> all running walsenders as the above, by using the configuration info\n> that\n> this walsender is based on. But this calculated lowest priority would\n> be\n> invalid if other walsender is based on different (e.g., old)\n> configuraiton.\n> This can cause the (other) walsender to have lower priority than\n> the calculated lowest priority and the second loop in\n> SyncRepGetSyncStandbysPriority() to unexpectedly end.\n> \n> Therefore, the band-aid fix seems to be to set the lowest priority to\n> very large number at the beginning of\n> SyncRepGetSyncStandbysPriority().\n\nOr just ignore impossible priorities as non-sync standby. 
Anyway the\nconfused state is fixed after all walsenders have loaded the new\nconfiguration.\n\nI remember that I posted a bandaid for maybe the same issue.\n\nhttps://www.postgresql.org/message-id/20200207.125251.146972241588695685.horikyota.ntt@gmail.com\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 30 Mar 2020 16:53:10 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in SyncRepGetSyncStandbysPriority" }, { "msg_contents": "On Fri, 27 Mar 2020 at 13:54, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n>\n>\n> On 2020/03/27 10:26, Tom Lane wrote:\n> > Twice in the past month [1][2], buildfarm member hoverfly has managed\n> > to reach the \"unreachable\" Assert(false) at the end of\n> > SyncRepGetSyncStandbysPriority.\n>\n> When I search the past discussions, I found that Noah Misch reported\n> the same issue.\n> https://www.postgresql.org/message-id/20200206074552.GB3326097@rfd.leadboat.com\n>\n> > What seems likely to me, after quickly eyeballing the code, is that\n> > hoverfly is hitting the blatantly-obvious race condition in that function.\n> > Namely, that the second loop supposes that the state of the walsender\n> > array hasn't changed since the first loop.\n> >\n> > The minimum fix for this, I suppose, would have the first loop capture\n> > the sync_standby_priority value for each walsender along with what it's\n> > already capturing. 
But I wonder if the whole function shouldn't be\n> > rewritten from scratch, because it seems like the algorithm is both\n> > expensively brute-force and unintelligible, which is a sad combination.\n> > It's likely that the number of walsenders would never be high enough\n> > that efficiency could matter, but then couldn't we use an algorithm\n> > that is less complicated and more obviously correct?\n>\n> +1 to rewrite the function with better algorithm.\n>\n> > (Because the\n> > alternative conclusion, if you reject the theory that a race is happening,\n> > is that the algorithm is just flat out buggy; something that's not too\n> > easy to disprove either.)\n> >\n> > Another fairly dubious thing here is that whether or not *am_sync\n> > gets set depends not only on whether MyWalSnd is claiming to be\n> > synchronous but on how many lower-numbered walsenders are too.\n> > Is that really the right thing?\n> >\n> > But worse than any of that is that the return value seems to be\n> > a list of walsender array indexes, meaning that the callers cannot\n> > use it without making even stronger assumptions about the array\n> > contents not having changed since the start of this function.\n> >\n> > It sort of looks like the design is based on the assumption that\n> > the array contents can't change while SyncRepLock is held ... but\n> > if that's the plan then why bother with the per-walsender spinlocks?\n> > In any case this assumption seems to be failing, suggesting either\n> > that there's a caller that's not holding SyncRepLock when it calls\n> > this function, or that somebody is failing to take that lock while\n> > modifying the array.\n>\n> As far as I read the code, that assumption seems still valid. But the problem\n> is that each walsender updates MyWalSnd->sync_standby_priority at each\n> convenient timing, when SIGHUP is signaled. 
That is, at a certain moment,\n> some walsenders (also their WalSnd entries in shmem) work based on\n> the latest configuration but the others (also their WalSnd entries) work based\n> on the old one.\n>\n> lowest_priority = SyncRepConfig->nmembers;\n> next_highest_priority = lowest_priority + 1;\n>\n> SyncRepGetSyncStandbysPriority() calculates the lowest priority among\n> all running walsenders as the above, by using the configuration info that\n> this walsender is based on. But this calculated lowest priority would be\n> invalid if other walsender is based on different (e.g., old) configuration.\n> This can cause the (other) walsender to have lower priority than\n> the calculated lowest priority and the second loop in\n> SyncRepGetSyncStandbysPriority() to unexpectedly end.\n\nI have the same understanding. Since sync_standby_priority is\nprotected by SyncRepLock, these values of each walsender are not\nchanged through two loops in SyncRepGetSyncStandbysPriority().\nHowever, as Fujii-san already mentioned, the true lowest priority can\nbe lower than lowest_priority, nmembers, when only part of walsenders\nreloaded the configuration, which in turn could be the cause of\nleaving entries in the pending list at the end of the function.\n\n> Therefore, the band-aid fix seems to be to set the lowest priority to\n> very large number at the beginning of SyncRepGetSyncStandbysPriority().\n\nI think we can use max_wal_senders. 
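To make the failure mode and the band-aid concrete, here is a minimal standalone sketch (simplified stand-in types and hypothetical priorities, not the real WalSnd array or the actual syncrep.c code) of the two-loop shape: a walsender still advertising a priority from an older, larger configuration is never drained from the pending set when lowest_priority comes from the new configuration, while a sentinel larger than any possible priority drains everything:

```c
/*
 * Minimal sketch -- NOT the real syncrep.c code. Mimics the two-loop
 * shape of SyncRepGetSyncStandbysPriority(): the first loop collects
 * active candidates, the second drains them priority by priority up
 * to lowest_priority, stopping early once nothing is pending.
 */
#include <assert.h>
#include <limits.h>
#include <stdbool.h>

#define MOCK_NUM_WALSND 4

typedef struct
{
	bool		active;
	int			sync_standby_priority;	/* 0 = not a sync candidate */
} MockWalSnd;

/* Returns the number of candidates left undrained (0 means OK). */
static int
drain_pending(const MockWalSnd *walsnd, int n, int lowest_priority)
{
	bool		pending[MOCK_NUM_WALSND] = {false};
	int			npending = 0;

	/* first loop: collect candidates */
	for (int i = 0; i < n; i++)
		if (walsnd[i].active && walsnd[i].sync_standby_priority > 0)
		{
			pending[i] = true;
			npending++;
		}

	/* second loop: drain by ascending priority number */
	for (int p = 1; p <= lowest_priority && npending > 0; p++)
		for (int i = 0; i < n; i++)
			if (pending[i] && walsnd[i].sync_standby_priority == p)
			{
				pending[i] = false;
				npending--;
			}

	/* the real code reaches Assert(false) when this is nonzero */
	return npending;
}
```

(The early exit once nothing is pending is what keeps the sentinel version of the loop from spinning all the way to INT_MAX.)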
And we can change the second loop\nso that we exit from the function if the pending list is empty.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 31 Mar 2020 23:16:20 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in SyncRepGetSyncStandbysPriority" }, { "msg_contents": "On Tue, 31 Mar 2020 at 23:16, Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Fri, 27 Mar 2020 at 13:54, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >\n> >\n> >\n> > On 2020/03/27 10:26, Tom Lane wrote:\n> > > Twice in the past month [1][2], buildfarm member hoverfly has managed\n> > > to reach the \"unreachable\" Assert(false) at the end of\n> > > SyncRepGetSyncStandbysPriority.\n> >\n> > When I search the past discussions, I found that Noah Misch reported\n> > the same issue.\n> > https://www.postgresql.org/message-id/20200206074552.GB3326097@rfd.leadboat.com\n> >\n> > > What seems likely to me, after quickly eyeballing the code, is that\n> > > hoverfly is hitting the blatantly-obvious race condition in that function.\n> > > Namely, that the second loop supposes that the state of the walsender\n> > > array hasn't changed since the first loop.\n> > >\n> > > The minimum fix for this, I suppose, would have the first loop capture\n> > > the sync_standby_priority value for each walsender along with what it's\n> > > already capturing. 
But I wonder if the whole function shouldn't be\n> > > rewritten from scratch, because it seems like the algorithm is both\n> > > expensively brute-force and unintelligible, which is a sad combination.\n> > > It's likely that the number of walsenders would never be high enough\n> > > that efficiency could matter, but then couldn't we use an algorithm\n> > > that is less complicated and more obviously correct?\n> >\n> > +1 to rewrite the function with better algorithm.\n> >\n> > > (Because the\n> > > alternative conclusion, if you reject the theory that a race is happening,\n> > > is that the algorithm is just flat out buggy; something that's not too\n> > > easy to disprove either.)\n> > >\n> > > Another fairly dubious thing here is that whether or not *am_sync\n> > > gets set depends not only on whether MyWalSnd is claiming to be\n> > > synchronous but on how many lower-numbered walsenders are too.\n> > > Is that really the right thing?\n> > >\n> > > But worse than any of that is that the return value seems to be\n> > > a list of walsender array indexes, meaning that the callers cannot\n> > > use it without making even stronger assumptions about the array\n> > > contents not having changed since the start of this function.\n> > >\n> > > It sort of looks like the design is based on the assumption that\n> > > the array contents can't change while SyncRepLock is held ... but\n> > > if that's the plan then why bother with the per-walsender spinlocks?\n> > > In any case this assumption seems to be failing, suggesting either\n> > > that there's a caller that's not holding SyncRepLock when it calls\n> > > this function, or that somebody is failing to take that lock while\n> > > modifying the array.\n> >\n> > As far as I read the code, that assumption seems still valid. But the problem\n> > is that each walsender updates MyWalSnd->sync_standby_priority at each\n> > convenient timing, when SIGHUP is signaled. 
That is, at a certain moment,\n> > some walsenders (also their WalSnd entries in shmem) work based on\n> > the latest configuration but the others (also their WalSnd entries) work based\n> > on the old one.\n> >\n> > lowest_priority = SyncRepConfig->nmembers;\n> > next_highest_priority = lowest_priority + 1;\n> >\n> > SyncRepGetSyncStandbysPriority() calculates the lowest priority among\n> > all running walsenders as the above, by using the configuration info that\n> > this walsender is based on. But this calculated lowest priority would be\n> > invalid if other walsender is based on different (e.g., old) configuraiton.\n> > This can cause the (other) walsender to have lower priority than\n> > the calculated lowest priority and the second loop in\n> > SyncRepGetSyncStandbysPriority() to unexpectedly end.\n>\n> I have the same understanding. Since sync_standby_priroity is\n> protected by SyncRepLock these values of each walsender are not\n> changed through two loops in SyncRepGetSyncStandbysPriority().\n> However, as Fujii-san already mentioned the true lowest priority can\n> be lower than lowest_priority, nmembers, when only part of walsenders\n> reloaded the configuration, which in turn could be the cause of\n> leaving entries in the pending list at the end of the function.\n>\n> > Therefore, the band-aid fix seems to be to set the lowest priority to\n> > very large number at the beginning of SyncRepGetSyncStandbysPriority().\n>\n> I think we can use max_wal_senders.\n\nSorry, that's not true. 
We need another number large enough.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 1 Apr 2020 11:01:22 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in SyncRepGetSyncStandbysPriority" }, { "msg_contents": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com> writes:\n> On Tue, 31 Mar 2020 at 23:16, Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n>>> Therefore, the band-aid fix seems to be to set the lowest priority to\n>>> very large number at the beginning of SyncRepGetSyncStandbysPriority().\n\n>> I think we can use max_wal_senders.\n\n> Sorry, that's not true. We need another number large enough.\n\nThe buildfarm had another three failures of this type today, so that\nmotivated me to look at it some more. I don't think this code needs\na band-aid fix; I think \"nuke from orbit\" is more nearly the right\nlevel of response.\n\nThe point that I was trying to make originally is that it seems quite\ninsane to imagine that a walsender's sync_standby_priority value is\nsomehow more stable than the very existence of the process. Yet we\nonly require a walsender to lock its own mutex while claiming or\ndisowning its WalSnd entry (by setting or clearing the pid field).\nSo I think it's nuts to define those fields as being protected by\nthe global SyncRepLock.\n\nEven without considering the possibility that a walsender has just\nstarted or stopped, we have the problem Fujii-san described that after\na change in the synchronous_standby_names GUC setting, different\nwalsenders will update their values of sync_standby_priority at\ndifferent instants. (BTW, I now notice that Noah had previously\nidentified this problem at [1].)\n\nThus, even while holding SyncRepLock, we do not have a guarantee that\nwe'll see a consistent set of sync_standby_priority values. 
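As a throwaway illustration (simplified types and hypothetical priorities; the real array lives in shared memory behind per-entry spinlocks), staggered SIGHUP absorption means a scan taken between two reloads sees a mixture of old- and new-configuration priorities, even though every individual store is protected:

```c
/*
 * Toy model -- not the real shared-memory array. Each walsender
 * re-evaluates its priority only when it absorbs SIGHUP, so a scan
 * taken after the first reload but before the second sees values
 * consistent with neither configuration.
 */
#include <assert.h>

#define NSENDERS 2

static int	shared_priority[NSENDERS];

static void
reload_config(int walsender, const int *new_prio)
{
	/* in the real code: a mutex-protected store, one walsender at a time */
	shared_priority[walsender] = new_prio[walsender];
}
```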
In fact\nwe don't even know that the walsnd array entries still belong to the\nprocesses that last set those values. This is what is breaking\nSyncRepGetSyncStandbysPriority, and what it means is that there's\nreally fundamentally no chance of that function producing trustworthy\nresults. The \"band aid\" fixes discussed here might avoid crashing on\nthe Assert, but they won't fix the problems that (a) the result is\npossibly wrong and (b) it can become stale immediately even if it's\nright when returned.\n\nNow, there are only two callers of SyncRepGetSyncStandbys:\nSyncRepGetSyncRecPtr and pg_stat_get_wal_senders. The latter is\nmostly cosmetic (which is a good thing, because to add insult to\ninjury, it continues to use the list after releasing SyncRepLock;\nnot that continuing to hold that lock would make things much safer).\nIf I'm reading the code correctly, the former doesn't really care\nexactly which walsenders are sync standbys: all it cares about is\nto collect their WAL position pointers.\n\nWhat I think we should do about this is, essentially, to get rid of\nSyncRepGetSyncStandbys. Instead, let's have each walsender advertise\nwhether *it* believes that it is a sync standby, based on its last\nevaluation of the relevant GUCs. This would be a bool that it'd\ncompute and set alongside sync_standby_priority. (Hm, maybe we'd not\neven need to have that field anymore? Not sure.) We should also\nredefine that flag, and sync_standby_priority if it survives, as being\nprotected by the per-walsender mutex not SyncRepLock. 
Then, what\nSyncRepGetSyncRecPtr would do is just sweep through the walsender\narray and collect WAL position pointers from the walsenders that\nclaim to be sync standbys at the instant that it's inspecting them.\npg_stat_get_wal_senders could also use those flags instead of the\nlist from SyncRepGetSyncStandbys.\n\nIt's likely that this definition would have slightly different\nbehavior from the current implementation during the period where\nthe system is absorbing a change in the set of synchronous\nwalsenders. However, since the current implementation is visibly\nwrong during that period anyway, I'm not sure how this could be\nworse. And at least we can be certain that SyncRepGetSyncRecPtr\nwill not include WAL positions from already-dead walsenders in\nits calculations, which *is* a hazard in the code as it stands.\n\nI also estimate that this would be noticeably more efficient than\nthe current code, since the logic to decide who's a sync standby\nwould only run when we're dealing with walsender start/stop or\nSIGHUP, rather than every time SyncRepGetSyncRecPtr runs.\n\nDon't especially want to code this myself, though. Anyone?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/20200206074552.GB3326097%40rfd.leadboat.com\n\n\n", "msg_date": "Sat, 11 Apr 2020 18:30:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Race condition in SyncRepGetSyncStandbysPriority" }, { "msg_contents": "At Sat, 11 Apr 2020 18:30:30 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Masahiko Sawada <masahiko.sawada@2ndquadrant.com> writes:\n> > On Tue, 31 Mar 2020 at 23:16, Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> >>> Therefore, the band-aid fix seems to be to set the lowest priority to\n> >>> very large number at the beginning of SyncRepGetSyncStandbysPriority().\n> \n> >> I think we can use max_wal_senders.\n> \n> > Sorry, that's not true. 
We need another number large enough.\n> \n> The buildfarm had another three failures of this type today, so that\n> motivated me to look at it some more. I don't think this code needs\n> a band-aid fix; I think \"nuke from orbit\" is more nearly the right\n> level of response.\n> \n> The point that I was trying to make originally is that it seems quite\n> insane to imagine that a walsender's sync_standby_priority value is\n> somehow more stable than the very existence of the process. Yet we\n> only require a walsender to lock its own mutex while claiming or\n> disowning its WalSnd entry (by setting or clearing the pid field).\n> So I think it's nuts to define those fields as being protected by\n> the global SyncRepLock.\n\nRight. FWIW, furthermore, even SyncRepConfig->syncrep_method can be\ninconsistent among walsenders. I haven't thought that it can be\nrelied on as always consistent and it is enough that it makes a\nconsistent result only while the setting and the set of walsenders is\nstable.\n\n> Even without considering the possibility that a walsender has just\n> started or stopped, we have the problem Fujii-san described that after\n> a change in the synchronous_standby_names GUC setting, different\n> walsenders will update their values of sync_standby_priority at\n> different instants. (BTW, I now notice that Noah had previously\n> identified this problem at [1].)\n> \n> Thus, even while holding SyncRepLock, we do not have a guarantee that\n> we'll see a consistent set of sync_standby_priority values. In fact\n> we don't even know that the walsnd array entries still belong to the\n> processes that last set those values. This is what is breaking\n> SyncRepGetSyncStandbysPriority, and what it means is that there's\n> really fundamentally no chance of that function producing trustworthy\n> results. 
The \"band aid\" fixes discussed here might avoid crashing on\n> the Assert, but they won't fix the problems that (a) the result is\n> possibly wrong and (b) it can become stale immediately even if it's\n> right when returned.\n\nAgreed. And I thought that it's not a problem if we had a wrong result\ntemporarily. And the instability persists for the standby-reply\ninterval at most (unless the next cause of instability comes).\n\n> Now, there are only two callers of SyncRepGetSyncStandbys:\n> SyncRepGetSyncRecPtr and pg_stat_get_wal_senders. The latter is\n> mostly cosmetic (which is a good thing, because to add insult to\n> injury, it continues to use the list after releasing SyncRepLock;\n> not that continuing to hold that lock would make things much safer).\n> If I'm reading the code correctly, the former doesn't really care\n> exactly which walsenders are sync standbys: all it cares about is\n> to collect their WAL position pointers.\n\nAgreed. To find the sync standby with the largest delay.\n\n> What I think we should do about this is, essentially, to get rid of\n> SyncRepGetSyncStandbys. Instead, let's have each walsender advertise\n> whether *it* believes that it is a sync standby, based on its last\n> evaluation of the relevant GUCs. This would be a bool that it'd\n> compute and set alongside sync_standby_priority. (Hm, maybe we'd not\n\nMmm.. SyncRepGetStandbyPriority returns the \"priority\" that a\nwalsender thinks it is at, among synchronous_standby_names. Then to\ndecide \"I am a sync standby\" we need to know how many walsenders with\nhigher priority are alive now. SyncRepGetSyncStandbyPriority does the\njudgment now and suffers from the inconsistency of priority values.\n\nIn short, it seems to me like moving the problem into another\nplace. But I think that there might be a smarter way to find \"I am\nsync\".\n\n> even need to have that field anymore? Not sure.) 
We should also\n> redefine that flag, and sync_standby_priority if it survives, as being\n> protected by the per-walsender mutex not SyncRepLock. Then, what\n> SyncRepGetSyncRecPtr would do is just sweep through the walsender\n> array and collect WAL position pointers from the walsenders that\n> claim to be sync standbys at the instant that it's inspecting them.\n> pg_stat_get_wal_senders could also use those flags instead of the\n> list from SyncRepGetSyncStandbys.\n> \n> It's likely that this definition would have slightly different\n> behavior from the current implementation during the period where\n> the system is absorbing a change in the set of synchronous\n> walsenders. However, since the current implementation is visibly\n> wrong during that period anyway, I'm not sure how this could be\n> worse. And at least we can be certain that SyncRepGetSyncRecPtr\n> will not include WAL positions from already-dead walsenders in\n> its calculations, which *is* a hazard in the code as it stands.\n> \n> I also estimate that this would be noticeably more efficient than\n> the current code, since the logic to decide who's a sync standby\n> would only run when we're dealing with walsender start/stop or\n> SIGHUP, rather than every time SyncRepGetSyncRecPtr runs.\n> \n> Don't especially want to code this myself, though. 
Anyone?\n> \n> \t\t\tregards, tom lane\n> \n> [1] https://www.postgresql.org/message-id/flat/20200206074552.GB3326097%40rfd.leadboat.com\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 13 Apr 2020 15:31:01 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in SyncRepGetSyncStandbysPriority" }, { "msg_contents": "At Mon, 13 Apr 2020 15:31:01 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Sat, 11 Apr 2020 18:30:30 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> > The point that I was trying to make originally is that it seems quite\n> > insane to imagine that a walsender's sync_standby_priority value is\n> > somehow more stable than the very existence of the process. Yet we\n> > only require a walsender to lock its own mutex while claiming or\n> > disowning its WalSnd entry (by setting or clearing the pid field).\n> > So I think it's nuts to define those fields as being protected by\n> > the global SyncRepLock.\n> \n> Right. FWIW, furthermore, even SyncRepConfig->syncrep_method can be\n> inconsistent among walsenders. 
I haven't thought that it can be\n> relied on as always consistent and it is enough that it makes a\n> consistent result only while the setting and the set of walsenders is\n> stable.\n\nYes, the sentence \"and (I haven't thought that) it is enough ..\" is a\nmistake of \"and I have thought that it is enough that..\".\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 13 Apr 2020 15:34:24 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in SyncRepGetSyncStandbysPriority" }, { "msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> At Sat, 11 Apr 2020 18:30:30 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n>> What I think we should do about this is, essentially, to get rid of\n>> SyncRepGetSyncStandbys. Instead, let's have each walsender advertise\n>> whether *it* believes that it is a sync standby, based on its last\n>> evaluation of the relevant GUCs. This would be a bool that it'd\n>> compute and set alongside sync_standby_priority. (Hm, maybe we'd not\n\n> Mmm.. SyncRepGetStandbyPriority returns the \"priority\" that a\n> walsender thinks it is at, among synchronous_standby_names. Then to\n> decide \"I am a sync standby\" we need to know how many walsenders with\n> higher priority are alive now. SyncRepGetSyncStandbyPriority does the\n> judgment now and suffers from the inconsistency of priority values.\n\nYeah. After looking a bit closer, I think that the current definition\nof sync_standby_priority (that is, as the result of local evaluation\nof SyncRepGetStandbyPriority()) is OK. The problem is what we're doing\nwith it. 
I suggest that what we should do in SyncRepGetSyncRecPtr()\nis make one sweep across the WalSnd array, collecting PID,\nsync_standby_priority, *and* the WAL pointers from each valid entry.\nThen examine that data and decide which WAL value we need, without assuming\nthat the sync_standby_priority values are necessarily totally consistent.\nBut in any case we must examine each entry just once while holding its\nmutex, not go back to it later expecting it to still be the same.\n\nAnother thing that I'm finding interesting is that I now see this is\nnot at all new code. It doesn't look like SyncRepGetSyncStandbysPriority\nhas changed much since 2016. So how come we didn't detect this problem\nlong ago? I searched the buildfarm logs for assertion failures in\nsyncrep.c, looking back one year, and here's what I found:\n\n sysname | branch | snapshot | stage | l \n------------+---------------+---------------------+---------------+-------------------------------------------------------------------------------------------------------------------------------------------------------\n nightjar | REL_10_STABLE | 2019-08-13 23:04:41 | recoveryCheck | TRAP: FailedAssertion(\"!(((bool) 0))\", File: \"/pgbuild/root/REL_10_STABLE/pgsql.build/../pgsql/src/backend/replication/syncrep.c\", Line: 940)\n hoverfly | REL9_6_STABLE | 2019-11-07 17:19:12 | recoveryCheck | TRAP: FailedAssertion(\"!(((bool) 0))\", File: \"syncrep.c\", Line: 723)\n hoverfly | HEAD | 2019-11-22 12:15:08 | recoveryCheck | TRAP: FailedAssertion(\"false\", File: \"syncrep.c\", Line: 951)\n francolin | HEAD | 2020-01-16 23:10:06 | recoveryCheck | TRAP: FailedAssertion(\"false\", File: \"/home/andres/build/buildfarm-francolin/HEAD/pgsql.build/../pgsql/src/backend/replication/syncrep.c\", Line: 951)\n hoverfly | REL_11_STABLE | 2020-02-29 01:34:55 | recoveryCheck | TRAP: FailedAssertion(\"!(0)\", File: \"syncrep.c\", Line: 946)\n hoverfly | REL9_6_STABLE | 2020-03-26 13:51:15 | recoveryCheck | TRAP: 
FailedAssertion(\"!(((bool) 0))\", File: \"syncrep.c\", Line: 723)\n hoverfly | REL9_6_STABLE | 2020-04-07 21:52:07 | recoveryCheck | TRAP: FailedAssertion(\"!(((bool) 0))\", File: \"syncrep.c\", Line: 723)\n curculio | HEAD | 2020-04-11 18:30:21 | recoveryCheck | TRAP: FailedAssertion(\"false\", File: \"syncrep.c\", Line: 951)\n sidewinder | HEAD | 2020-04-11 18:45:39 | recoveryCheck | TRAP: FailedAssertion(\"false\", File: \"syncrep.c\", Line: 951)\n curculio | HEAD | 2020-04-11 20:30:26 | recoveryCheck | TRAP: FailedAssertion(\"false\", File: \"syncrep.c\", Line: 951)\n sidewinder | HEAD | 2020-04-11 21:45:48 | recoveryCheck | TRAP: FailedAssertion(\"false\", File: \"syncrep.c\", Line: 951)\n sidewinder | HEAD | 2020-04-13 10:45:35 | recoveryCheck | TRAP: FailedAssertion(\"false\", File: \"syncrep.c\", Line: 951)\n conchuela | HEAD | 2020-04-13 16:00:18 | recoveryCheck | TRAP: FailedAssertion(\"false\", File: \"/home/pgbf/buildroot/HEAD/pgsql.build/../pgsql/src/backend/replication/syncrep.c\", Line: 951)\n sidewinder | HEAD | 2020-04-13 18:45:34 | recoveryCheck | TRAP: FailedAssertion(\"false\", File: \"syncrep.c\", Line: 951)\n(14 rows)\n\nThe line numbers vary in the back branches, but all of these crashes are\nat that same Assert. So (a) yes, this does happen in the back branches,\nbut (b) some fairly recent change has made it a whole lot more probable.\nNeither syncrep.c nor 007_sync_rep.pl have changed much in some time,\nso whatever the change was was indirect. Curious. 
Is it just timing?\n\nI'm giving the side-eye to Noah's recent changes 328c70997 and 421685812,\nbut this isn't enough evidence to say definitely that that's what boosted\nthe failure rate.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 13 Apr 2020 21:34:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Race condition in SyncRepGetSyncStandbysPriority" }, { "msg_contents": "On Tue, 14 Apr 2020 at 10:34, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> > At Sat, 11 Apr 2020 18:30:30 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in\n> >> What I think we should do about this is, essentially, to get rid of\n> >> SyncRepGetSyncStandbys. Instead, let's have each walsender advertise\n> >> whether *it* believes that it is a sync standby, based on its last\n> >> evaluation of the relevant GUCs. This would be a bool that it'd\n> >> compute and set alongside sync_standby_priority. (Hm, maybe we'd not\n>\n> > Mmm.. SyncRepGetStandbyPriority returns the \"priority\" that a\n> > walsender thinks it is at, among synchronous_standby_names. Then to\n> > decide \"I am a sync standby\" we need to know how many walsenders with\n> > higher priority are alive now. SyncRepGetSyncStandbyPriority does the\n> > judgment now and suffers from the inconsistency of priority values.\n>\n> Yeah. After looking a bit closer, I think that the current definition\n> of sync_standby_priority (that is, as the result of local evaluation\n> of SyncRepGetStandbyPriority()) is OK. The problem is what we're doing\n> with it. 
I suggest that what we should do in SyncRepGetSyncRecPtr()\n> is make one sweep across the WalSnd array, collecting PID,\n> sync_standby_priority, *and* the WAL pointers from each valid entry.\n> Then examine that data and decide which WAL value we need, without assuming\n> that the sync_standby_priority values are necessarily totally consistent.\n> But in any case we must examine each entry just once while holding its\n> mutex, not go back to it later expecting it to still be the same.\n\nCan we have a similar approach of sync_standby_defined for\nsync_standby_priority? That is, checkpointer is responsible for\nchanging sync_standby_priority of all walsenders when SIGHUP arrives. That\nway, all walsenders can see a consistent view of\nsync_standby_priority. And when a walsender starts, it sets\nsync_standby_priority by itself. The logic to decide who's a sync\nstandby doesn't change. SyncRepGetSyncRecPtr() gets all walsenders\nhaving higher priority along with their WAL positions.\n\n>\n> Another thing that I'm finding interesting is that I now see this is\n> not at all new code. It doesn't look like SyncRepGetSyncStandbysPriority\n> has changed much since 2016. So how come we didn't detect this problem\n> long ago? 
I searched the buildfarm logs for assertion failures in\n> syncrep.c, looking back one year, and here's what I found:\n>\n> sysname | branch | snapshot | stage | l\n> ------------+---------------+---------------------+---------------+-------------------------------------------------------------------------------------------------------------------------------------------------------\n> nightjar | REL_10_STABLE | 2019-08-13 23:04:41 | recoveryCheck | TRAP: FailedAssertion(\"!(((bool) 0))\", File: \"/pgbuild/root/REL_10_STABLE/pgsql.build/../pgsql/src/backend/replication/syncrep.c\", Line: 940)\n> hoverfly | REL9_6_STABLE | 2019-11-07 17:19:12 | recoveryCheck | TRAP: FailedAssertion(\"!(((bool) 0))\", File: \"syncrep.c\", Line: 723)\n> hoverfly | HEAD | 2019-11-22 12:15:08 | recoveryCheck | TRAP: FailedAssertion(\"false\", File: \"syncrep.c\", Line: 951)\n> francolin | HEAD | 2020-01-16 23:10:06 | recoveryCheck | TRAP: FailedAssertion(\"false\", File: \"/home/andres/build/buildfarm-francolin/HEAD/pgsql.build/../pgsql/src/backend/replication/syncrep.c\", Line: 951)\n> hoverfly | REL_11_STABLE | 2020-02-29 01:34:55 | recoveryCheck | TRAP: FailedAssertion(\"!(0)\", File: \"syncrep.c\", Line: 946)\n> hoverfly | REL9_6_STABLE | 2020-03-26 13:51:15 | recoveryCheck | TRAP: FailedAssertion(\"!(((bool) 0))\", File: \"syncrep.c\", Line: 723)\n> hoverfly | REL9_6_STABLE | 2020-04-07 21:52:07 | recoveryCheck | TRAP: FailedAssertion(\"!(((bool) 0))\", File: \"syncrep.c\", Line: 723)\n> curculio | HEAD | 2020-04-11 18:30:21 | recoveryCheck | TRAP: FailedAssertion(\"false\", File: \"syncrep.c\", Line: 951)\n> sidewinder | HEAD | 2020-04-11 18:45:39 | recoveryCheck | TRAP: FailedAssertion(\"false\", File: \"syncrep.c\", Line: 951)\n> curculio | HEAD | 2020-04-11 20:30:26 | recoveryCheck | TRAP: FailedAssertion(\"false\", File: \"syncrep.c\", Line: 951)\n> sidewinder | HEAD | 2020-04-11 21:45:48 | recoveryCheck | TRAP: FailedAssertion(\"false\", File: \"syncrep.c\", Line: 951)\n> 
sidewinder | HEAD | 2020-04-13 10:45:35 | recoveryCheck | TRAP: FailedAssertion(\"false\", File: \"syncrep.c\", Line: 951)\n> conchuela | HEAD | 2020-04-13 16:00:18 | recoveryCheck | TRAP: FailedAssertion(\"false\", File: \"/home/pgbf/buildroot/HEAD/pgsql.build/../pgsql/src/backend/replication/syncrep.c\", Line: 951)\n> sidewinder | HEAD | 2020-04-13 18:45:34 | recoveryCheck | TRAP: FailedAssertion(\"false\", File: \"syncrep.c\", Line: 951)\n> (14 rows)\n>\n> The line numbers vary in the back branches, but all of these crashes are\n> at that same Assert. So (a) yes, this does happen in the back branches,\n> but (b) some fairly recent change has made it a whole lot more probable.\n> Neither syncrep.c nor 007_sync_rep.pl have changed much in some time,\n> so whatever the change was was indirect. Curious. Is it just timing?\n\nInteresting. It's happening on certain animals, not all. Especially\ntests with HEAD on sidewinder and curculio, which are NetBSD 7 and\nOpenBSD 5.9 respectively, started to fail at a high rate since a\ncouple of days ago.\n\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 14 Apr 2020 13:06:14 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in SyncRepGetSyncStandbysPriority" }, { "msg_contents": "At Tue, 14 Apr 2020 13:06:14 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in \n> On Tue, 14 Apr 2020 at 10:34, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> > > At Sat, 11 Apr 2020 18:30:30 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in\n> > >> What I think we should do about this is, essentially, to get rid of\n> > >> SyncRepGetSyncStandbys. 
Instead, let's have each walsender advertise\n> > >> whether *it* believes that it is a sync standby, based on its last\n> > >> evaluation of the relevant GUCs. This would be a bool that it'd\n> > >> compute and set alongside sync_standby_priority. (Hm, maybe we'd not\n> >\n> > > Mmm.. SyncRepGetStandbyPriority returns the \"priority\" that a\n> > > walsender thinks it is at, among synchronous_standby_names. Then to\n> > > decide \"I am a sync standby\" we need to know how many walsenders with\n> > > higher priority are alive now. SyncRepGetSyncStandbyPriority does the\n> > > judgment now and suffers from the inconsistency of priority values.\n> >\n> > Yeah. After looking a bit closer, I think that the current definition\n> > of sync_standby_priority (that is, as the result of local evaluation\n> > of SyncRepGetStandbyPriority()) is OK. The problem is what we're doing\n> > with it. I suggest that what we should do in SyncRepGetSyncRecPtr()\n> > is make one sweep across the WalSnd array, collecting PID,\n> > sync_standby_priority, *and* the WAL pointers from each valid entry.\n> > Then examine that data and decide which WAL value we need, without assuming\n> > that the sync_standby_priority values are necessarily totally consistent.\n> > But in any case we must examine each entry just once while holding its\n> > mutex, not go back to it later expecting it to still be the same.\n\nSyncRepGetSyncStandbysPriority() is runing holding SyncRepLock so\nsync_standby_priority of any walsender can be changed while the\nfunction is scanning welsenders. The issue is we already have\ninconsistent walsender information before we enter the function. 
Thus\nhow many times we scan on the array doesn't make any difference.\n\nI think we need to do one of the following:\n\n A) prevent SyncRepGetSyncStandbysPriority from being entered while\n walsender priority is inconsistent.\n\n B) make SyncRepGetSyncStandbysPriority be tolerant of priority\n inconsistency.\n\n C) protect walsender priority array from being inconsistent.\n\n(B) is a band-aid. To achieve A, we need a central controller\nof priority config handling. C is:\n\n> Can we have a similar approach of sync_standby_defined for\n> sync_standby_priority? That is, the checkpointer is responsible for\n> changing sync_standby_priority of all walsenders when SIGHUP. That\n> way, all walsenders can see a consistent view of\n> sync_standby_priority. And when a walsender starts, it sets\n> sync_standby_priority by itself. The logic to decide who's a sync\n> standby doesn't change. SyncRepGetSyncRecPtr() gets all walsenders\n> having higher priority along with their WAL positions.\n\nYeah, it works if we do that, but the problem with that way is that to\ndetermine the priority of walsenders, we need to know what walsenders are\nrunning. That is, when a new walsender comes, the process needs to be aware\nof the arrival (or leaving) right away and reassign the priority of\nevery walsender again.\n\nIf we accept sharing variable-length information among processes,\nsharing sync_standby_names or parsed SyncRepConfigData among processes\nwould work.\n\n\n> >\n> > Another thing that I'm finding interesting is that I now see this is\n> > not at all new code. It doesn't look like SyncRepGetSyncStandbysPriority\n> > has changed much since 2016. So how come we didn't detect this problem\n> > long ago? I searched the buildfarm logs for assertion failures in\n> > syncrep.c, looking back one year, and here's what I found:\n...\n> > The line numbers vary in the back branches, but all of these crashes are\n> > at that same Assert. 
So (a) yes, this does happen in the back branches,\n> > but (b) some fairly recent change has made it a whole lot more probable.\n> > Neither syncrep.c nor 007_sync_rep.pl have changed much in some time,\n> > so whatever the change was was indirect. Curious. Is it just timing?\n> \n> Interesting. It's happening on certain animals, not all. Especially\n> tests with HEAD on sidewinder and curculio, which are NetBSD 7 and\n> OpenBSD 5.9 respectively, started to fail at a high rate since a\n> couple of days ago.\n\nCouldn't this be related to the timing of config reloading? (I haven't checked\nanything yet.)\n\n| commit 421685812290406daea58b78dfab0346eb683bbb\n| Author: Noah Misch <noah@leadboat.com>\n| Date: Sat Apr 11 10:30:00 2020 -0700\n| \n| When WalSndCaughtUp, sleep only in WalSndWaitForWal().\n| Before sleeping, WalSndWaitForWal() sends a keepalive if MyWalSnd->write\n| < sentPtr. That is important in logical replication. When the latest\n| physical LSN yields no logical replication messages (a common case),\n| that keepalive elicits a reply, and processing the reply updates\n| pg_stat_replication.replay_lsn. WalSndLoop() lacks that; when\n| WalSndLoop() slept, replay_lsn advancement could stall until\n| wal_receiver_status_interval elapsed. This sometimes stalled\n| src/test/subscription/t/001_rep_changes.pl for up to 10s.\n| \n| Discussion: https://postgr.es/m/20200406063649.GA3738151@rfd.leadboat.com\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 14 Apr 2020 18:35:38 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in SyncRepGetSyncStandbysPriority" }, { "msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> SyncRepGetSyncStandbysPriority() is runing holding SyncRepLock so\n> sync_standby_priority of any walsender can be changed while the\n> function is scanning welsenders. 
The issue is we already have\n> inconsistent walsender information before we enter the function. Thus\n> how many times we scan on the array doesn't make any difference.\n\n*Yes it does*. The existing code can deliver entirely broken results\nif some walsender exits between where we examine the priorities and\nwhere we fetch the WAL pointers. While that doesn't seem to be the\nexact issue we're seeing in the buildfarm, it's still another obvious\nbug in this code. I will not accept a \"fix\" that doesn't fix that.\n\n> I think we need to do one of the followings.\n\n> A) prevent SyncRepGetSyncStandbysPriority from being entered while\n> walsender priority is inconsistent.\n> B) make SyncRepGetSyncStandbysPriority be tolerant of priority\n> inconsistency.\n> C) protect walsender priority array from beinig inconsistent.\n\n(B) seems like the only practical solution from here. We could\nprobably arrange for synchronous update of the priorities when\nthey change in response to a GUC change, but it doesn't seem to\nme to be practical to do that in response to walsender exit.\nYou'd end up finding that an unexpected walsender exit results\nin panic'ing the system, which is no better than where we are now.\n\nIt doesn't seem to me to be that hard to implement the desired\nsemantics for synchronous_standby_names with inconsistent info.\nIn FIRST mode you basically just need to take the N smallest\npriorities you see in the array, but without assuming there are no\nduplicates or holes. It might be a good idea to include ties at the\nend, that is if you see 1,2,2,4 or 1,3,3,4 and you want 2 sync\nstandbys, include the first three of them in the calculation until\nthe inconsistency is resolved. 
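For concreteness, that FIRST-mode selection rule can be sketched outside the server like this (an illustrative model only, not the actual C code in syncrep.c; the function name and the list-of-integers interface are invented here):

```python
def pick_sync_standbys(priorities, num_sync):
    """Pick walsender indexes treated as synchronous in FIRST mode:
    the num_sync smallest priorities seen in the array, tolerating
    duplicates and holes, and including ties at the cutoff priority."""
    # Stable sort by priority; equal priorities keep walsender-index order.
    order = sorted(range(len(priorities)), key=lambda i: priorities[i])
    if len(order) <= num_sync:
        return order
    cutoff = priorities[order[num_sync - 1]]
    # Keep everything at or below the cutoff, so 1,2,2,4 with num_sync=2
    # yields three candidates until the duplicate priorities resolve.
    return [i for i in order if priorities[i] <= cutoff]
```

With priorities [1, 2, 2, 4] and num_sync = 2 this returns indexes [0, 1, 2], i.e. the "include the first three of them" behavior described above.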
In ANY mode I don't see that\ninconsistent priorities matter at all.\n\n> If we accept to share variable-length information among processes,\n> sharing sync_standby_names or parsed SyncRepConfigData among processes\n> would work.\n\nNot sure that we really need more than what's being shared now,\nie each process's last-known index in the sync_standby_names list.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 14 Apr 2020 09:52:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Race condition in SyncRepGetSyncStandbysPriority" }, { "msg_contents": "I wrote:\n> It doesn't seem to me to be that hard to implement the desired\n> semantics for synchronous_standby_names with inconsistent info.\n> In FIRST mode you basically just need to take the N smallest\n> priorities you see in the array, but without assuming there are no\n> duplicates or holes. It might be a good idea to include ties at the\n> end, that is if you see 1,2,2,4 or 1,3,3,4 and you want 2 sync\n> standbys, include the first three of them in the calculation until\n> the inconsistency is resolved. In ANY mode I don't see that\n> inconsistent priorities matter at all.\n\nConcretely, I think we ought to do the attached, or something pretty\nclose to it.\n\nI'm not really happy about breaking ties based on walsnd_index,\nbut I see that there are several TAP test cases that fail if we\ndo something else. 
I'm inclined to think those tests are bogus ...\nbut I won't argue to change them right now.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 14 Apr 2020 16:32:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Race condition in SyncRepGetSyncStandbysPriority" }, { "msg_contents": "At Tue, 14 Apr 2020 09:52:42 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> > SyncRepGetSyncStandbysPriority() is runing holding SyncRepLock so\n> > sync_standby_priority of any walsender can be changed while the\n> > function is scanning welsenders. The issue is we already have\n> > inconsistent walsender information before we enter the function. Thus\n> > how many times we scan on the array doesn't make any difference.\n> \n> *Yes it does*. The existing code can deliver entirely broken results\n> if some walsender exits between where we examine the priorities and\n> where we fetch the WAL pointers. While that doesn't seem to be the\n\nAh. I didn't take that as an inconsistency. Actually, a walsender exit\ninactivates the corresponding slot by setting pid = 0. In a bad case\n(as you mentioned upthread) the entry can be occupied by another\nwalsender. However, sync_standby_priority cannot be updated until the\nwhole work is finished.\n\n> exact issue we're seeing in the buildfarm, it's still another obvious\n> bug in this code. I will not accept a \"fix\" that doesn't fix that.\n\nI think that the \"inconsistency\" that can be observed in a process is\ndisagreement between SyncRepConfig->nmembers and\n<each_walsnd_entry>->sync_standby_priority. If any one of the walsenders\nregards its priority as lower (larger in value) than nmembers in the\n\"current\" process, the assertion fires. 
If that is the issue, the\nissue is not dynamic inconsistency.\n\n# It's the assumption of my band-aid.\n\n> > I think we need to do one of the followings.\n> \n> > A) prevent SyncRepGetSyncStandbysPriority from being entered while\n> > walsender priority is inconsistent.\n> > B) make SyncRepGetSyncStandbysPriority be tolerant of priority\n> > inconsistency.\n> > C) protect walsender priority array from beinig inconsistent.\n> \n> (B) seems like the only practical solution from here. We could\n> probably arrange for synchronous update of the priorities when\n> they change in response to a GUC change, but it doesn't seem to\n> me to be practical to do that in response to walsender exit.\n> You'd end up finding that an unexpected walsender exit results\n> in panic'ing the system, which is no better than where we are now.\n\nI agree with you on the whole. I thought of several ways to keep the\narray consistent, but all of them looked like too much.\n\n> It doesn't seem to me to be that hard to implement the desired\n> semantics for synchronous_standby_names with inconsistent info.\n> In FIRST mode you basically just need to take the N smallest\n> priorities you see in the array, but without assuming there are no\n> duplicates or holes. It might be a good idea to include ties at the\n> end, that is if you see 1,2,2,4 or 1,3,3,4 and you want 2 sync\n> standbys, include the first three of them in the calculation until\n> the inconsistency is resolved. In ANY mode I don't see that\n> inconsistent priorities matter at all.\n\nMmm, the priority lists like 1,2,2,4 are not considered an inconsistency\nat all in the context of walsender priority. That happens reliably if\nany two or more walreceivers report the same application_name. 
I\nbelieve the existing code is already taking that case into\nconsideration.\n\n> > If we accept to share variable-length information among processes,\n> > sharing sync_standby_names or parsed SyncRepConfigData among processes\n> > would work.\n> \n> Not sure that we really need more than what's being shared now,\n> ie each process's last-known index in the sync_standby_names list.\n\nIf we take (B), we don't need any more than that. (A) and (C) would need\nmore.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 15 Apr 2020 10:21:27 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in SyncRepGetSyncStandbysPriority" }, { "msg_contents": "At Tue, 14 Apr 2020 16:32:40 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> I wrote:\n> > It doesn't seem to me to be that hard to implement the desired\n> > semantics for synchronous_standby_names with inconsistent info.\n> > In FIRST mode you basically just need to take the N smallest\n> > priorities you see in the array, but without assuming there are no\n> > duplicates or holes. It might be a good idea to include ties at the\n> > end, that is if you see 1,2,2,4 or 1,3,3,4 and you want 2 sync\n> > standbys, include the first three of them in the calculation until\n> > the inconsistency is resolved. In ANY mode I don't see that\n> > inconsistent priorities matter at all.\n> \n> Concretely, I think we ought to do the attached, or something pretty\n> close to it.\n\nLooking at SyncRepGetSyncStandbys, I agree that it's good not to assume\nlowest_priority, which I had thought was the culprit of the assertion\nfailure. The current code intends to use less memory. I don't think\nthere is a case where only 3 out of 1000 standbys are required to be\nsync-standby, so collecting all walsenders and then sorting them seems\na reasonable strategy. 
The new code looks clearer.\n\n+\t\tstby->is_sync_standby = true;\t/* might change below */\n\nI'm uneasy with that. In quorum mode all running standbys are marked\nas \"sync\" and that's bogus.\n\nThe only users of the flag seem to be:\n\nSyncRepGetSyncRecPtr:\n+\t\t\t*am_sync = sync_standbys[i].is_sync_standby;\n\nand\n\nSyncRepGetOldestSyncRecPtr:\n+\t\t/* Ignore candidates that aren't considered synchronous */\n+\t\tif (!sync_standbys[i].is_sync_standby)\n+\t\t\tcontinue;\n\nOn the other hand sync_standbys is already sorted in priority order, so I think we can get rid of the member by setting *am_sync as follows.\n\n\nSyncRepGetSyncRecPtr:\n if (sync_standbys[i].is_me)\n {\n *am_sync = (i < SyncRepConfig->num_sync);\n break;\n }\n\nAnd the second user can be as follows.\n\nSyncRepGetOldestSyncRecPtr:\n /* Ignore candidates that aren't considered synchronous */\n if (i >= SyncRepConfig->num_sync)\n break;\n\n> I'm not really happy about breaking ties based on walsnd_index,\n> but I see that there are several TAP test cases that fail if we\n> do something else. I'm inclined to think those tests are bogus ...\n> but I won't argue to change them right now.\n\nAgreed about the tie-breaker.\n\nI'm looking at this more closely.\n\nregards.", "msg_date": "Wed, 15 Apr 2020 11:35:58 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in SyncRepGetSyncStandbysPriority" }, { "msg_contents": "On Tue, Apr 14, 2020 at 04:32:40PM -0400, Tom Lane wrote:\n> I wrote:\n> > It doesn't seem to me to be that hard to implement the desired\n> > semantics for synchronous_standby_names with inconsistent info.\n> > In FIRST mode you basically just need to take the N smallest\n> > priorities you see in the array, but without assuming there are no\n> > duplicates or holes. 
It might be a good idea to include ties at the\n> > end, that is if you see 1,2,2,4 or 1,3,3,4 and you want 2 sync\n> > standbys, include the first three of them in the calculation until\n> > the inconsistency is resolved. In ANY mode I don't see that\n> > inconsistent priorities matter at all.\n> \n> Concretely, I think we ought to do the attached, or something pretty\n> close to it.\n> \n> I'm not really happy about breaking ties based on walsnd_index,\n> but I see that there are several TAP test cases that fail if we\n> do something else. I'm inclined to think those tests are bogus ...\n> but I won't argue to change them right now.\n\nThis passes the test battery I wrote in preparation for the 2020-02 thread.\n\n\n", "msg_date": "Tue, 14 Apr 2020 20:14:02 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in SyncRepGetSyncStandbysPriority" }, { "msg_contents": "On Tue, 14 Apr 2020 at 18:35, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>\n> At Tue, 14 Apr 2020 13:06:14 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in\n> > On Tue, 14 Apr 2020 at 10:34, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >\n> > > Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> > > > At Sat, 11 Apr 2020 18:30:30 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in\n> > > >> What I think we should do about this is, essentially, to get rid of\n> > > >> SyncRepGetSyncStandbys. Instead, let's have each walsender advertise\n> > > >> whether *it* believes that it is a sync standby, based on its last\n> > > >> evaluation of the relevant GUCs. This would be a bool that it'd\n> > > >> compute and set alongside sync_standby_priority. (Hm, maybe we'd not\n> > >\n> > > > Mmm.. SyncRepGetStandbyPriority returns the \"priority\" that a\n> > > > walsender thinks it is at, among synchronous_standby_names. 
Then to\n> > > > decide \"I am a sync standby\" we need to know how many walsenders with\n> > > > higher priority are alive now. SyncRepGetSyncStandbyPriority does the\n> > > > judgment now and suffers from the inconsistency of priority values.\n> > >\n> > > Yeah. After looking a bit closer, I think that the current definition\n> > > of sync_standby_priority (that is, as the result of local evaluation\n> > > of SyncRepGetStandbyPriority()) is OK. The problem is what we're doing\n> > > with it. I suggest that what we should do in SyncRepGetSyncRecPtr()\n> > > is make one sweep across the WalSnd array, collecting PID,\n> > > sync_standby_priority, *and* the WAL pointers from each valid entry.\n> > > Then examine that data and decide which WAL value we need, without assuming\n> > > that the sync_standby_priority values are necessarily totally consistent.\n> > > But in any case we must examine each entry just once while holding its\n> > > mutex, not go back to it later expecting it to still be the same.\n>\n> SyncRepGetSyncStandbysPriority() is runing holding SyncRepLock so\n> sync_standby_priority of any walsender can be changed while the\n> function is scanning welsenders. The issue is we already have\n> inconsistent walsender information before we enter the function. Thus\n> how many times we scan on the array doesn't make any difference.\n>\n> I think we need to do one of the followings.\n>\n> A) prevent SyncRepGetSyncStandbysPriority from being entered while\n> walsender priority is inconsistent.\n>\n> B) make SyncRepGetSyncStandbysPriority be tolerant of priority\n> inconsistency.\n>\n> C) protect walsender priority array from beinig inconsistent.\n>\n> The (B) is the band aids. To achieve A we need to central controller\n> of priority config handling. C is:\n>\n> > Can we have a similar approach of sync_standby_defined for\n> > sync_standby_priority? That is, checkpionter is responsible for\n> > changing sync_standby_priority of all walsenders when SIGHUP. 
That\n> > way, all walsenders can see a consistent view of\n> > sync_standby_priority. And when a walsender starts, it sets\n> > sync_standby_priority by itself. The logic to decide who's a sync\n> > standby doesn't change. SyncRepGetSyncRecPtr() gets all walsenders\n> > having higher priority along with their WAL positions.\n>\n> Yeah, it works if we do , but the problem of that way is that to\n> determin priority of walsenders, we need to know what walsenders are\n> running. That is, when new walsender comes the process needs to aware\n> of the arrival (or leaving) right away and reassign the priority of\n> every wal senders again.\n\nI think we don't need to reassign the priority when a new walsender\ncomes or leaves. IIUC the priority is calculated based only on\nsynchronous_standby_names. Coming or leaving a walsender doesn't\naffect the others' priorities.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Wed, 15 Apr 2020 13:01:02 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in SyncRepGetSyncStandbysPriority" }, { "msg_contents": "At Wed, 15 Apr 2020 11:35:58 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> I'm looking this more closer.\n\nIt looks like the right direction to me.\n\nAs mentioned in the previous mail, I removed is_sync_standby from\nSyncRepStandbyData. But just doing that breaks pg_stat_get_wal_senders.\nIt is an existing issue, but the logic for sync_state (values[10]) looks\nodd. Fixed in the attached.\n\nSyncRepInitConfig uses mutex instead of SyncRepLock. Since the\nintegrity of sync_standby_priority is not guaranteed anyway, it seems OK to\nme. It seems fine to remove the assertion and requirement about\nSyncRepLock from SyncRepGetSyncRecPtr for the same reason. (Actually\nthe lock is held, though.) 
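As a side note, the point made upthread — that a walsender's priority is derived from synchronous_standby_names alone, so other walsenders coming or going cannot change it — can be sketched as follows (an illustrative model; the real logic lives in SyncRepGetStandbyPriority() and handles details, such as name quoting, that are omitted here):

```python
def standby_priority(standby_names, application_name):
    """Priority a walsender assigns itself: the 1-based position of the
    first entry matching its application_name ('*' matches any name),
    or 0 when it is not listed, i.e. not a candidate for sync standby."""
    for pos, name in enumerate(standby_names, start=1):
        if name == "*" or name == application_name:
            return pos
    return 0
```

The result depends only on the GUC list and the standby's own name, which is why no reassignment is needed when another walsender starts or exits.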
\n\nSyncRepGetSyncStandbysPriority doesn't seem worth existing as a\nfunction. Removed in the attached.\n\n+\tnum_standbys = SyncRepGetSyncStandbys(&sync_standbys);\n\nThe list is no longer consists only of synchronous standbys. I\nchanged the function name, variable name and tried to adjust related\ncomments.\n\nIt's not what the patch did, but I don't understand why\nSyncRepGetNthLatestSyncRecPtr takes SyncRepConfig->num_sync but\nSyncRepGetOldest.. accesses it directly. Changed the function\n*Oldest* in the attached. I didn't do that but finally, the two\nfunctions can be consolidated, just by moving the selection logic\ncurrently in SyncRepGetSyncRecPtr into the new function.\n\n\nThe resulting patch is attached.\n\n- removed is_sync_standby from SyncRepStandbyData\n- Fixed the logic for values[10] in pg_stat_get_wal_senders\n- Changed the signature of SyncRepGetOldestSyncRecPtr\n- Adjusted some comments to the behavioral change of\n SyncRepGet(Sync)Standbys.\n\nregards.\n\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Wed, 15 Apr 2020 16:26:50 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in SyncRepGetSyncStandbysPriority" }, { "msg_contents": "At Wed, 15 Apr 2020 13:01:02 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in \n> On Tue, 14 Apr 2020 at 18:35, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> >\n> > At Tue, 14 Apr 2020 13:06:14 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in\n> > > On Tue, 14 Apr 2020 at 10:34, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > >\n> > > > Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> > > > > At Sat, 11 Apr 2020 18:30:30 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in\n> > > > >> What I think we should do about this is, essentially, to get rid of\n> > > > >> SyncRepGetSyncStandbys. 
Instead, let's have each walsender advertise\n> > > > >> whether *it* believes that it is a sync standby, based on its last\n> > > > >> evaluation of the relevant GUCs. This would be a bool that it'd\n> > > > >> compute and set alongside sync_standby_priority. (Hm, maybe we'd not\n> > > >\n> > > > > Mmm.. SyncRepGetStandbyPriority returns the \"priority\" that a\n> > > > > walsender thinks it is at, among synchronous_standby_names. Then to\n> > > > > decide \"I am a sync standby\" we need to know how many walsenders with\n> > > > > higher priority are alive now. SyncRepGetSyncStandbyPriority does the\n> > > > > judgment now and suffers from the inconsistency of priority values.\n> > > >\n> > > > Yeah. After looking a bit closer, I think that the current definition\n> > > > of sync_standby_priority (that is, as the result of local evaluation\n> > > > of SyncRepGetStandbyPriority()) is OK. The problem is what we're doing\n> > > > with it. I suggest that what we should do in SyncRepGetSyncRecPtr()\n> > > > is make one sweep across the WalSnd array, collecting PID,\n> > > > sync_standby_priority, *and* the WAL pointers from each valid entry.\n> > > > Then examine that data and decide which WAL value we need, without assuming\n> > > > that the sync_standby_priority values are necessarily totally consistent.\n> > > > But in any case we must examine each entry just once while holding its\n> > > > mutex, not go back to it later expecting it to still be the same.\n> >\n> > SyncRepGetSyncStandbysPriority() is runing holding SyncRepLock so\n> > sync_standby_priority of any walsender can be changed while the\n> > function is scanning welsenders. The issue is we already have\n> > inconsistent walsender information before we enter the function. 
Thus\n> > how many times we scan on the array doesn't make any difference.\n> >\n> > I think we need to do one of the followings.\n> >\n> > A) prevent SyncRepGetSyncStandbysPriority from being entered while\n> > walsender priority is inconsistent.\n> >\n> > B) make SyncRepGetSyncStandbysPriority be tolerant of priority\n> > inconsistency.\n> >\n> > C) protect walsender priority array from beinig inconsistent.\n> >\n> > The (B) is the band aids. To achieve A we need to central controller\n> > of priority config handling. C is:\n> >\n> > > Can we have a similar approach of sync_standby_defined for\n> > > sync_standby_priority? That is, checkpionter is responsible for\n> > > changing sync_standby_priority of all walsenders when SIGHUP. That\n> > > way, all walsenders can see a consistent view of\n> > > sync_standby_priority. And when a walsender starts, it sets\n> > > sync_standby_priority by itself. The logic to decide who's a sync\n> > > standby doesn't change. SyncRepGetSyncRecPtr() gets all walsenders\n> > > having higher priority along with their WAL positions.\n> >\n> > Yeah, it works if we do , but the problem of that way is that to\n> > determin priority of walsenders, we need to know what walsenders are\n> > running. That is, when new walsender comes the process needs to aware\n> > of the arrival (or leaving) right away and reassign the priority of\n> > every wal senders again.\n> \n> I think we don't need to reassign the priority when new walsender\n> comes or leaves. IIUC The priority is calculated based on only\n> synchronous_standby_names. Coming or leaving a walsender doesn't\n> affect other's priorities.\n\nSorry, the \"priority\" in this area is a bit confusing. The \"priority\"\ndefined by synchronous_standby_names is determined in isolation from\nthe presence of walsenders. 
The \"priority\" in\nwalsnd->sync_standby_priority needs walsender presence to determine.\nI thought of the latter in the discussion.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 15 Apr 2020 16:33:30 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in SyncRepGetSyncStandbysPriority" }, { "msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> At Tue, 14 Apr 2020 16:32:40 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> +\t\tstby->is_sync_standby = true;\t/* might change below */\n\n> I'm uneasy with that. In quorum mode all running standbys are marked\n> as \"sync\" and that's bogus.\n\nI don't follow that? The existing coding of SyncRepGetSyncStandbysQuorum\nreturns all the candidates in its list, so this is isomorphic to that.\n\nPossibly a different name for the flag would be more suited?\n\n> On the other hand sync_standbys is already sorted in priority order so I think we can get rid of the member by setting *am_sync as the follows.\n\n> SyncRepGetSyncRecPtr:\n> if (sync_standbys[i].is_me)\n> {\n> *am_sync = (i < SyncRepConfig->num_sync);\n> break;\n> }\n\nI disagree with this, it will change the behavior in the quorum case.\n\nIn any case, a change like this will cause callers to know way more than\nthey ought to about the ordering of the array. 
In my mind, the fact that\nSyncRepGetSyncStandbysPriority is sorting the array is an internal\nimplementation detail; I do not want it to be part of the API.\n\n(Apropos to that, I realized from working on this patch that there's\nanother, completely undocumented assumption in the existing code, that\nthe integer list will be sorted by walsender index for equal priorities.\nI don't like that either, and not just because it's undocumented.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 15 Apr 2020 11:31:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Race condition in SyncRepGetSyncStandbysPriority" }, { "msg_contents": "At Wed, 15 Apr 2020 11:31:49 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> > At Tue, 14 Apr 2020 16:32:40 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> > +\t\tstby->is_sync_standby = true;\t/* might change below */\n> \n> > I'm uneasy with that. In quorum mode all running standbys are marked\n> > as \"sync\" and that's bogus.\n> \n> I don't follow that? The existing coding of SyncRepGetSyncStandbysQuorum\n> returns all the candidates in its list, so this is isomorphic to that.\n\nThe existing code actually does that. On the other hand\nSyncRepGetSyncStandbysPriority returns standbys that *are known to be*\nsynchronous, but *Quorum returns standbys that *can be* synchronous.\nWhat the two functions return is different. 
So it\nshould be is_sync_standby for -Priority and is_sync_candidate for\n-Quorum.\n\n> Possibly a different name for the flag would be more suited?\n> \n> > On the other hand sync_standbys is already sorted in priority order so I think we can get rid of the member by setting *am_sync as the follows.\n> \n> > SyncRepGetSyncRecPtr:\n> > if (sync_standbys[i].is_me)\n> > {\n> > *am_sync = (i < SyncRepConfig->num_sync);\n> > break;\n> > }\n> \n> I disagree with this, it will change the behavior in the quorum case.\n\nOops, you're right. I find the whole thing there (myself included) a bit\nconfusing. syncrep_method affects how some values (specifically\nam_sync and sync_standbys) are translated at several calling depths.\nAnd the *am_sync informs nothing in quorum mode.\n\n> In any case, a change like this will cause callers to know way more than\n> they ought to about the ordering of the array. In my mind, the fact that\n> SyncRepGetSyncStandbysPriority is sorting the array is an internal\n> implementation detail; I do not want it to be part of the API.\n\nAnyway, am_sync and is_sync_standby are utterly useless in quorum\nmode. That discussion would be pretty persuasive otherwise, but actually the\nupper layers (SyncRepReleaseWaiters and SyncRepGetSyncRecPtr) refer\nto syncrep_method to differentiate the interpretation of the am_sync\nflag and sync_standbys list. So the difference is actually\npart of the API anyway.\n\nAfter thinking some more, I concluded that some of the variables are\nwrongly named or considered, and redundant. The function of am_sync is\ncovered by got_recptr in SyncRepReleaseWaiters, so it's enough that\nSyncRepGetSyncRecPtr just reports to the caller whether the caller may\nrelease some of the waiter processes. 
This simplifies the related\nfunctions and makes it (to me) clearer.\n\nPlease find the attached.\n\n\n> (Apropos to that, I realized from working on this patch that there's\n> another, completely undocumented assumption in the existing code, that\n> the integer list will be sorted by walsender index for equal priorities.\n> I don't like that either, and not just because it's undocumented.)\n\nThat seems accidental. Sorting by priority is the designed behavior\nand documented; in contrast, entries of the same priority are ordered\nin index order by accident and not documented, which means it can be\nchanged anytime. I think we don't define everything in such detail.\n\nregards.", "msg_date": "Thu, 16 Apr 2020 16:22:41 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in SyncRepGetSyncStandbysPriority" }, { "msg_contents": "On Thu, 16 Apr 2020 at 16:22, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>\n> At Wed, 15 Apr 2020 11:31:49 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in\n> > Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> > > At Tue, 14 Apr 2020 16:32:40 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in\n> > > + stby->is_sync_standby = true; /* might change below */\n> >\n> > > I'm uneasy with that. In quorum mode all running standbys are marked\n> > > as \"sync\" and that's bogus.\n> >\n> > I don't follow that? The existing coding of SyncRepGetSyncStandbysQuorum\n> > returns all the candidates in its list, so this is isomorphic to that.\n>\n> The existing code actully does that. On the other hand\n> SyncRepGetSyncStandbysPriority returns standbys that *are known to be*\n> synchronous, but *Quorum returns standbys that *can be* synchronous.\n> What the two functions return are different from each other. 
So it\n> should be is_sync_standby for -Priority and is_sync_candidate for\n> -Quorum.\n>\n> > Possibly a different name for the flag would be more suited?\n> >\n> > > On the other hand sync_standbys is already sorted in priority order so I think we can get rid of the member by setting *am_sync as the follows.\n> >\n> > > SyncRepGetSyncRecPtr:\n> > > if (sync_standbys[i].is_me)\n> > > {\n> > > *am_sync = (i < SyncRepConfig->num_sync);\n> > > break;\n> > > }\n> >\n> > I disagree with this, it will change the behavior in the quorum case.\n>\n> Oops, you're right. I find the whole thing there (and me) is a bit\n> confusing. syncrep_method affects how some values (specifically\n> am_sync and sync_standbys) are translated at several calling depths.\n> And the *am_sync informs nothing in quorum mode.\n>\n> > In any case, a change like this will cause callers to know way more than\n> > they ought to about the ordering of the array. In my mind, the fact that\n> > SyncRepGetSyncStandbysPriority is sorting the array is an internal\n> > implementation detail; I do not want it to be part of the API.\n>\n> Anyway the am_sync and is_sync_standby is utterly useless in quorum\n> mode. That discussion is pretty persuasive if not, but actually the\n> upper layers (SyncRepReleaseWaiters and SyncRepGetSyncRecPtr) referes\n> to syncrep_method to differentiate the interpretation of the am_sync\n> flag and sync_standbys list. So anyway the difference is actually a\n> part of API.\n>\n> After thinking some more, I concluded that some of the variables are\n> wrongly named or considered, and redundant. The fucntion of am_sync is\n> covered by got_recptr in SyncRepReleaseWaiters, so it's enough that\n> SyncRepGetSyncRecPtr just reports to the caller whether the caller may\n> release some of the waiter processes. 
This simplifies the related\n> functions and make it (to me) clearer.\n>\n> Please find the attached.\n>\n>\n> > (Apropos to that, I realized from working on this patch that there's\n> > another, completely undocumented assumption in the existing code, that\n> > the integer list will be sorted by walsender index for equal priorities.\n> > I don't like that either, and not just because it's undocumented.)\n>\n> That seems accidentally. Sorting by priority is the disigned behavior\n> and documented, in contrast, entries of the same priority are ordered\n> in index order by accident and not documented, that means it can be\n> changed anytime. I think we don't define everyting in such detail.\n>\n\nThis is just a notice; I'm reading your latest patch but it seems to\ninclude unrelated changes:\n\n$ git diff --stat\n src/backend/replication/syncrep.c | 475\n+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-----------------------------------------------------------------------------------------------------------------------------------------------\n src/backend/replication/walsender.c | 40 ++++++++++++++-----\n src/bin/pg_dump/compress_io.c | 12 ++++++\n src/bin/pg_dump/pg_backup_directory.c | 48 ++++++++++++++++++-----\n src/include/replication/syncrep.h | 20 +++++++++-\n src/include/replication/walsender_private.h | 16 ++++----\n 6 files changed, 274 insertions(+), 337 deletions(-)\n\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Thu, 16 Apr 2020 16:48:28 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in SyncRepGetSyncStandbysPriority" }, { "msg_contents": "At Thu, 16 Apr 2020 16:48:28 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in \n> This is just a notice; I'm reading your latest patch but it seems to\n> include unrelated changes:\n> 
\n> $ git diff --stat\n> src/backend/replication/syncrep.c | 475\n> +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-----------------------------------------------------------------------------------------------------------------------------------------------\n> src/backend/replication/walsender.c | 40 ++++++++++++++-----\n> src/bin/pg_dump/compress_io.c | 12 ++++++\n> src/bin/pg_dump/pg_backup_directory.c | 48 ++++++++++++++++++-----\n> src/include/replication/syncrep.h | 20 +++++++++-\n> src/include/replication/walsender_private.h | 16 ++++----\n> 6 files changed, 274 insertions(+), 337 deletions(-)\n\nUgg. I failed to clean up working directory.. I didn't noticed as I\nmade the file by git diff. Thanks for noticing me of that.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 16 Apr 2020 18:26:26 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in SyncRepGetSyncStandbysPriority" }, { "msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> [ syncrep-fixes-4.patch ]\n\nI agree that we could probably improve the clarity of this code with\nfurther rewriting, but I'm still very opposed to the idea of having\ncallers know that the first num_sync array elements are the active\nones. It's wrong (or at least different from current behavior) for\nquorum mode, where there might be more than num_sync walsenders to\nconsider. And it might not generalize very well to other syncrep\nselection rules we might add in future, which might also not have\nexactly num_sync interesting walsenders. So I much prefer an API\ndefinition that uses bool flags in an array that has no particular\nordering (so far as the callers know, anyway). 
If you don't like\nis_sync_standby, how about some more-neutral name like is_active\nor is_interesting or include_position?\n\nI dislike the proposed comment revisions in SyncRepReleaseWaiters,\ntoo, particularly the change to say that what we're \"announcing\"\nis the ability to release waiters. You did not change the actual\nlog messages, and you would have gotten a lot of pushback if\nyou tried, because the current messages make sense to users and\nsomething like that would not. But by the same token this new\ncomment isn't too helpful to somebody reading the code.\n\n(Actually, I wonder why we even have the restriction that only\nsync standbys can release waiters. It's not like they are\ngoing to get different results from SyncRepGetSyncRecPtr than\nany other walsender would. Maybe we should just drop all the\nam_sync logic?)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 16 Apr 2020 11:39:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Race condition in SyncRepGetSyncStandbysPriority" }, { "msg_contents": "\n\nOn 2020/04/14 22:52, Tom Lane wrote:\n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n>> SyncRepGetSyncStandbysPriority() is runing holding SyncRepLock so\n>> sync_standby_priority of any walsender can be changed while the\n>> function is scanning welsenders. The issue is we already have\n>> inconsistent walsender information before we enter the function. Thus\n>> how many times we scan on the array doesn't make any difference.\n> \n> *Yes it does*. The existing code can deliver entirely broken results\n> if some walsender exits between where we examine the priorities and\n> where we fetch the WAL pointers.\n\nSo, in this case, the oldest lsn that SyncRepGetOldestSyncRecPtr()\ncalculates may be based also on the lsn of an already-exited walsender.\nIs this what you mean by \"broken results\"? If so, ISTM that this issue still\nremains even after applying your patch. No? 
The walsender marked\nas sync may still exit just before SyncRepGetOldestSyncRecPtr()\ncalculates the oldest lsn.\n\nIMO the broken results can be delivered when the walsender marked\nas sync exits *and* a new walsender comes at that moment. If this new\nwalsender uses the WalSnd slot that the exited walsender used,\nSyncRepGetOldestSyncRecPtr() wrongly calculates the oldest lsn based\non this new walsender (i.e., a different walsender from the one marked as sync).\nIf this is actually what you meant by \"broken results\", your patch\nseems fine and fixes the issue.\n\nBTW, since the patch changes the API of SyncRepGetSyncStandbys(),\nit should not be back-patched to avoid ABI break. Right?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 17 Apr 2020 02:20:04 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in SyncRepGetSyncStandbysPriority" }, { "msg_contents": "Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n> On 2020/04/14 22:52, Tom Lane wrote:\n>> *Yes it does*. The existing code can deliver entirely broken results\n>> if some walsender exits between where we examine the priorities and\n>> where we fetch the WAL pointers.\n\n> IMO the broken results can be delivered when the walsender marked\n> as sync exits *and* a new walsender comes at that moment. If this new\n> walsender uses the WalSnd slot that the exited walsender used,\n> SyncRepGetOldestSyncRecPtr() wrongly calculates the oldest lsn based\n> on this new walsender (i.e., a different walsender from the one marked as sync).\n\nRight, exactly, sorry that I was not more specific.\n\n> BTW, since the patch changes the API of SyncRepGetSyncStandbys(),\n> it should not be back-patched to avoid ABI break. 
Right?\n\nAnything that is using that is just as broken as the core code is, for the\nsame reasons, so I don't have a problem with changing its API. Maybe we\nshould rename it while we're at it, just to make it clear that we are\nbreaking any external callers. (If there are any, which seems somewhat\nunlikely.)\n\nThe only concession to ABI that I had in mind was to not re-order\nthe fields of WalSnd in the back branches.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 16 Apr 2020 14:00:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Race condition in SyncRepGetSyncStandbysPriority" }, { "msg_contents": "At Thu, 16 Apr 2020 11:39:06 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in \n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> > [ syncrep-fixes-4.patch ]\n> \n> I agree that we could probably improve the clarity of this code with\n> further rewriting, but I'm still very opposed to the idea of having\n> callers know that the first num_sync array elements are the active\n> ones. It's wrong (or at least different from current behavior) for\n> quorum mode, where there might be more than num_sync walsenders to\n> consider. And it might not generalize very well to other syncrep\n> selection rules we might add in future, which might also not have\n> exactly num_sync interesting walsenders. So I much prefer an API\n> definition that uses bool flags in an array that has no particular\n> ordering (so far as the callers know, anyway). If you don't like\n> is_sync_standby, how about some more-neutral name like is_active\n> or is_interesting or include_position?\n\nI'm convinced that each element should have is_sync_standby. I agree with the\nname is_sync_standby since I can't come up with a better name.\n\n> I dislike the proposed comment revisions in SyncRepReleaseWaiters,\n> too, particularly the change to say that what we're \"announcing\"\n> is the ability to release waiters. 
You did not change the actual\n> log messages, and you would have gotten a lot of pushback if\n> you tried, because the current messages make sense to users and\n> something like that would not. But by the same token this new\n> comment isn't too helpful to somebody reading the code.\n\nThe current log messages look perfect to me. I don't insist on the\ncomment change since I might be taking the definition of \"sync standby\" too\nstrictly.\n\n> (Actually, I wonder why we even have the restriction that only\n> sync standbys can release waiters. It's not like they are\n> going to get different results from SyncRepGetSyncRecPtr than\n> any other walsender would. Maybe we should just drop all the\n> am_sync logic?)\n\nI thought the same thing, though I didn't do that in the last patch.\n\nam_sync seems intended to reduce spurious wakeups, but actually\nspurious wakeups won't increase even without it. Thus the only\nremaining task of am_sync is the trigger for the log messages and that\nfact is the sign that the log messages should be emitted within\nSyncRepGetSyncRecPtr. That eliminates references to SyncRepConfig in\nSyncRepReleaseWaiters, which makes me feel at ease.\n\nThe attached is based on syncrep-fixes-1.patch + am_sync elimination.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Fri, 17 Apr 2020 14:58:34 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in SyncRepGetSyncStandbysPriority" }, { "msg_contents": "\n\nOn 2020/04/17 14:58, Kyotaro Horiguchi wrote:\n> At Thu, 16 Apr 2020 11:39:06 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in\n>> Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n>>> [ syncrep-fixes-4.patch ]\n>>\n>> I agree that we could probably improve the clarity of this code with\n>> further rewriting, but I'm still very opposed to the idea of having\n>> callers know that the first num_sync array elements are the active\n>> ones. 
It's wrong (or at least different from current behavior) for\n>> quorum mode, where there might be more than num_sync walsenders to\n>> consider. And it might not generalize very well to other syncrep\n>> selection rules we might add in future, which might also not have\n>> exactly num_sync interesting walsenders. So I much prefer an API\n>> definition that uses bool flags in an array that has no particular\n>> ordering (so far as the callers know, anyway). If you don't like\n>> is_sync_standby, how about some more-neutral name like is_active\n>> or is_interesting or include_position?\n> \n> I'm convinced that each element has is_sync_standby. I agree to the\n> name is_sync_standby since I don't come up with a better name.\n> \n>> I dislike the proposed comment revisions in SyncRepReleaseWaiters,\n>> too, particularly the change to say that what we're \"announcing\"\n>> is the ability to release waiters. You did not change the actual\n>> log messages, and you would have gotten a lot of pushback if\n>> you tried, because the current messages make sense to users and\n>> something like that would not. But by the same token this new\n>> comment isn't too helpful to somebody reading the code.\n> \n> The current log messages look perfect to me. I don't insist on the\n> comment change since I might take the definition of \"sync standby\" too\n> strictly.\n> \n>> (Actually, I wonder why we even have the restriction that only\n>> sync standbys can release waiters. It's not like they are\n>> going to get different results from SyncRepGetSyncRecPtr than\n>> any other walsender would. Maybe we should just drop all the\n>> am_sync logic?)\n> \n> I thought the same thing, though I didn't do that in the last patch.\n> \n> am_sync seems intending to reduce spurious wakeups but actually\n> spurious wakeup won't increase even without it. 
Thus the only\n> remaining task of am_sync is the trigger for the log messages and that\n> fact is the sign that the log messages should be emitted within\n> SyncRepGetSyncRecPtr. That eliminates references to SyncRepConfig in\n> SyncRepReleaseWaiters, which make me feel ease.\n> \n> The attached is baed on syncrep-fixes-1.patch + am_sync elimination.\n\nI agree that it might be worth considering the removal of am_sync for\nthe master branch or v14. But I think that it should not be back-patched.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 17 Apr 2020 16:03:34 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in SyncRepGetSyncStandbysPriority" }, { "msg_contents": "On 2020/04/17 3:00, Tom Lane wrote:\n> Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n>> On 2020/04/14 22:52, Tom Lane wrote:\n>>> *Yes it does*. The existing code can deliver entirely broken results\n>>> if some walsender exits between where we examine the priorities and\n>>> where we fetch the WAL pointers.\n> \n>> IMO that the broken results can be delivered when walsender marked\n>> as sync exits *and* new walsender comes at that moment. If this new\n>> walsender uses the WalSnd slot that the exited walsender used,\n>> SyncRepGetOldestSyncRecPtr() wronly calculates the oldest lsn based\n>> on this new walsender (i.e., different walsender from one marked as sync).\n> \n> Right, exactly, sorry that I was not more specific.\n> \n>> BTW, since the patch changes the API of SyncRepGetSyncStandbys(),\n>> it should not be back-patched to avoid ABI break. Right?\n> \n> Anything that is using that is just as broken as the core code is, for the\n> same reasons, so I don't have a problem with changing its API. Maybe we\n> should rename it while we're at it, just to make it clear that we are\n> breaking any external callers. 
(If there are any, which seems somewhat\n> unlikely.)\n\nI agree to change the API if that's the only way to fix the bug. But ISTM that\nwe can fix the bug without changing the API, like the attached patch does.\n\nYour patch changes the logic to pick up sync standbys, e.g., by using qsort(),\nin addition to the bug fix. This might be an improvement and I agree that\nit's worth considering that idea for the master branch or v14. But I'm not a\nfan of adding such changes into the back branches if they are not\nnecessary for the bug fix. I'd like to basically keep the current logic as it is,\nat least for the back branches, like the attached patch does.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Fri, 17 Apr 2020 16:31:36 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in SyncRepGetSyncStandbysPriority" }, { "msg_contents": "On Fri, 17 Apr 2020 at 14:58, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>\n> At Thu, 16 Apr 2020 11:39:06 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote in\n> > Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> > > [ syncrep-fixes-4.patch ]\n> >\n> > I agree that we could probably improve the clarity of this code with\n> > further rewriting, but I'm still very opposed to the idea of having\n> > callers know that the first num_sync array elements are the active\n> > ones. It's wrong (or at least different from current behavior) for\n> > quorum mode, where there might be more than num_sync walsenders to\n> > consider. And it might not generalize very well to other syncrep\n> > selection rules we might add in future, which might also not have\n> > exactly num_sync interesting walsenders. So I much prefer an API\n> > definition that uses bool flags in an array that has no particular\n> > ordering (so far as the callers know, anyway). 
If you don't like\n> > is_sync_standby, how about some more-neutral name like is_active\n> > or is_interesting or include_position?\n>\n> I'm convinced that each element has is_sync_standby. I agree to the\n> name is_sync_standby since I don't come up with a better name.\n>\n> > I dislike the proposed comment revisions in SyncRepReleaseWaiters,\n> > too, particularly the change to say that what we're \"announcing\"\n> > is the ability to release waiters. You did not change the actual\n> > log messages, and you would have gotten a lot of pushback if\n> > you tried, because the current messages make sense to users and\n> > something like that would not. But by the same token this new\n> > comment isn't too helpful to somebody reading the code.\n>\n> The current log messages look perfect to me. I don't insist on the\n> comment change since I might take the definition of \"sync standby\" too\n> strictly.\n>\n> > (Actually, I wonder why we even have the restriction that only\n> > sync standbys can release waiters. It's not like they are\n> > going to get different results from SyncRepGetSyncRecPtr than\n> > any other walsender would. Maybe we should just drop all the\n> > am_sync logic?)\n>\n> I thought the same thing, though I didn't do that in the last patch.\n>\n> am_sync seems intending to reduce spurious wakeups but actually\n> spurious wakeup won't increase even without it. Thus the only\n> remaining task of am_sync is the trigger for the log messages and that\n> fact is the sign that the log messages should be emitted within\n> SyncRepGetSyncRecPtr. 
That eliminates references to SyncRepConfig in\n> SyncRepReleaseWaiters, which make me feel ease.\n>\n> The attached is baed on syncrep-fixes-1.patch + am_sync elimination.\n>\n\nJust for confirmation, since the new approach doesn't change the fact that\nwalsenders reload the new config at their own convenient timing, it can still\nhappen that a walsender releases waiters according to an old config\nthat defines a smaller number of sync standbys, while walsenders are\nabsorbing a change in the set of synchronous walsenders. In the worst\ncase where the master crashes in the middle, we cannot be sure how\nmany sync servers the data has been replicated to. Is that right?\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Fri, 17 Apr 2020 17:03:11 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in SyncRepGetSyncStandbysPriority" }, { "msg_contents": "At Fri, 17 Apr 2020 16:03:34 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> I agree that it might be worth considering the removal of am_sync for\n> the master branch or v14. But I think that it should not be\n> back-patched.\n\nAh! 
Agreed.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 17 Apr 2020 17:07:06 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in SyncRepGetSyncStandbysPriority" }, { "msg_contents": "At Fri, 17 Apr 2020 17:03:11 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in \n> On Fri, 17 Apr 2020 at 14:58, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> > The attached is baed on syncrep-fixes-1.patch + am_sync elimination.\n> >\n> \n> Just for confirmation, since the new approach doesn't change that\n> walsenders reload new config at their convenient timing, it still can\n> happen that a walsender releases waiters according to the old config\n> that defines fewer number of sync standbys, during walsenders\n\nRight.\n\n> absorbing a change in the set of synchronous walsenders. In the worst\n> case where the master crashes in the middle, we cannot be sure how\n> many sync servers the data has been replicated to. Is that right?\n\nWalsenders can set a stupid value as the priority, or in a worse case the\nshared walsender information might belong to another walsender that was\nlaunched just now. In any case SyncRepGetSyncStandbys can return a set\nof walsenders with descending priority (in priority mode). What can\nhappen in the worst case is that some transactions are released based on\nslightly wrong LSN information. Such inconsistency can also happen when the\noldest sync standby in priority mode goes away and the sync LSN goes backward,\neven if the walsender list is kept strictly consistent.\n\nIn quorum mode, we cannot even know which servers endorsed the\nmaster's commit after a crash.\n\nI can't come up with a clean solution for such inconsistency or\nunrecoverability(?) 
for now..\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 17 Apr 2020 17:41:24 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in SyncRepGetSyncStandbysPriority" }, { "msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> At Fri, 17 Apr 2020 16:03:34 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n>> I agree that it might be worth considering the removal of am_sync for\n>> the master branch or v14. But I think that it should not be\n>> back-patched.\n\n> Ah! Agreed.\n\nYeah, that's not necessary to fix the bug. I'd be inclined to leave\nit for v14 at this point.\n\nI don't much like the patch Fujii-san posted, though. An important part\nof the problem, IMO, is that SyncRepGetSyncStandbysPriority is too\ncomplicated and it's unclear what dependencies it has on the set of\npriorities in shared memory being consistent. His patch does not improve\nthat situation; if anything it makes it worse.\n\nIf we're concerned about not breaking ABI in the back branches, what\nI propose we do about that is just leave SyncRepGetSyncStandbys in\nplace but not used by the core code, and remove it only in HEAD.\nWe can do an absolutely minimal fix for the assertion failure, in\ncase anybody is calling that code, by just dropping the Assert and\nletting SyncRepGetSyncStandbys return NIL if it falls out. (Or we\ncould let it return the incomplete list, which'd be the behavior\nyou get today in a non-assert build.)\n\nAlso, I realized while re-reading my patch that Kyotaro-san is onto\nsomething about the is_sync_standby flag not being necessary: instead\nwe can just have the new function SyncRepGetCandidateStandbys return\na reduced count. I'd initially believed that it was necessary for\nthat function to return the rejected candidate walsenders along with\nthe accepted ones, but that was a misunderstanding. 
I still don't\nwant its API spec to say anything about ordering of the result array,\nbut we don't need to.\n\nSo that leads me to the attached. I propose applying this to the\nback branches except for the rearrangement of WALSnd field order.\nIn HEAD, I'd remove SyncRepGetSyncStandbys and subroutines altogether.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 17 Apr 2020 11:31:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Race condition in SyncRepGetSyncStandbysPriority" }, { "msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> At Fri, 17 Apr 2020 17:03:11 +0900, Masahiko Sawada <masahiko.sawada@2ndquadrant.com> wrote in \n>> On Fri, 17 Apr 2020 at 14:58, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>> Just for confirmation, since the new approach doesn't change that\n>> walsenders reload new config at their convenient timing, it still can\n>> happen that a walsender releases waiters according to the old config\n>> that defines fewer number of sync standbys, during walsenders\n\n> Right.\n\n>> absorbing a change in the set of synchronous walsenders. In the worst\n>> case where the master crashes in the middle, we cannot be sure how\n>> many sync servers the data has been replicated to. Is that right?\n\n> Wal senders can set a stupid value as priority or in a worse case the\n> shared walsender information might be of another walsender that is\n> launched just now. In any case SyncRepGetSyncStandbys can return a set\n> of walsenders with descending priority (in priority mode). What can\n> be happen in the worst case is some transactions are released by a bit\n> wrong LSN information. Such inconsistency also can be happen when the\n> oldest sync standby in priority mode goes out and sync-LSN goes back\n> even if the wal-sender list is strictly kept consistent.\n\nI don't really see a problem here. 
It's true that transactions might\nbe released based on either the old or the new value of num_sync,\ndepending on whether the particular walsender executing the release\nlogic has noticed the SIGHUP yet. But if a transaction was released,\nthen there were at least num_sync confirmed transmissions of data\nto someplace, so it's not like you've got no redundancy at all.\n\nThe only thing that seems slightly odd is that there could in principle\nbe some transactions released on the basis of the new num_sync, and\nthen slightly later some transactions released on the basis of the old\nnum_sync. But I don't think it's really going to be possible to avoid\nthat, given that the GUC update is propagated in an asynchronous\nfashion.\n\nI spent a few moments wondering if we could avoid such cases by having\nSyncRepReleaseWaiters check for GUC updates after it's acquired\nSyncRepLock. But that wouldn't really guarantee much, since the\npostmaster can't deliver SIGHUP to all the walsenders simultaneously.\nI think the main practical effect would be to allow some possibly-slow\nprocessing to happen while holding SyncRepLock, which surely isn't a\ngreat idea.\n\nBTW, it might be worth documenting in this thread that my proposed\npatch intentionally doesn't move SyncRepReleaseWaiters' acquisition\nof SyncRepLock. With the patch, SyncRepGetSyncRecPtr does not require\nSyncRepLock so one could consider acquiring that lock only while updating \nwalsndctl and releasing waiters. My concern about that is that then\nit'd be possible for a later round of waiter-releasing to happen on the\nbasis of slightly older SyncRepGetSyncRecPtr results, if a walsender that\nhad done SyncRepGetSyncRecPtr first were only able to acquire the lock\nsecond. 
Perhaps that would be okay, but I'm not sure, so I left it\nalone.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 17 Apr 2020 12:30:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Race condition in SyncRepGetSyncStandbysPriority" }, { "msg_contents": "On Sat, 18 Apr 2020 at 00:31, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> > At Fri, 17 Apr 2020 16:03:34 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n> >> I agree that it might be worth considering the removal of am_sync for\n> >> the master branch or v14. But I think that it should not be\n> >> back-patched.\n>\n> > Ah! Agreed.\n>\n> Yeah, that's not necessary to fix the bug. I'd be inclined to leave\n> it for v14 at this point.\n>\n> I don't much like the patch Fujii-san posted, though. An important part\n> of the problem, IMO, is that SyncRepGetSyncStandbysPriority is too\n> complicated and it's unclear what dependencies it has on the set of\n> priorities in shared memory being consistent. His patch does not improve\n> that situation; if anything it makes it worse.\n>\n> If we're concerned about not breaking ABI in the back branches, what\n> I propose we do about that is just leave SyncRepGetSyncStandbys in\n> place but not used by the core code, and remove it only in HEAD.\n> We can do an absolutely minimal fix for the assertion failure, in\n> case anybody is calling that code, by just dropping the Assert and\n> letting SyncRepGetSyncStandbys return NIL if it falls out. (Or we\n> could let it return the incomplete list, which'd be the behavior\n> you get today in a non-assert build.)\n\n+1\n\n>\n> Also, I realized while re-reading my patch that Kyotaro-san is onto\n> something about the is_sync_standby flag not being necessary: instead\n> we can just have the new function SyncRepGetCandidateStandbys return\n> a reduced count. 
I'd initially believed that it was necessary for\n> that function to return the rejected candidate walsenders along with\n> the accepted ones, but that was a misunderstanding. I still don't\n> want its API spec to say anything about ordering of the result array,\n> but we don't need to.\n>\n> So that leads me to the attached. I propose applying this to the\n> back branches except for the rearrangement of WALSnd field order.\n> In HEAD, I'd remove SyncRepGetSyncStandbys and subroutines altogether.\n>\n\n+ /* Quick out if not even configured to be synchronous */\n+ if (SyncRepConfig == NULL)\n+ return false;\n\nI felt strange a bit that we do the above check in\nSyncRepGetSyncRecPtr() because SyncRepReleaseWaiters() which is the\nonly caller says the following before calling it:\n\n /*\n * We're a potential sync standby. Release waiters if there are enough\n * sync standbys and we are considered as sync.\n */\n LWLockAcquire(SyncRepLock, LW_EXCLUSIVE);\n\nCan we either change it to an assertion, move it to before acquiring\nSyncRepLock in SyncRepReleaseWaiters or just remove it?\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 18 Apr 2020 12:38:54 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in SyncRepGetSyncStandbysPriority" }, { "msg_contents": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com> writes:\n> On Sat, 18 Apr 2020 at 00:31, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> + /* Quick out if not even configured to be synchronous */\n>> + if (SyncRepConfig == NULL)\n>> + return false;\n\n> I felt strange a bit that we do the above check in\n> SyncRepGetSyncRecPtr() because SyncRepReleaseWaiters() which is the\n> only caller says the following before calling it:\n\nNotice there was such a test in SyncRepGetSyncRecPtr already --- I just\nmoved it to be before doing some work 
instead of after.\n\n> Can we either change it to an assertion, move it to before acquiring\n> SyncRepLock in SyncRepReleaseWaiters or just remove it?\n\nI have no objection to that in principle, but it seems like it's a\nchange in SyncRepGetSyncRecPtr's API that is not necessary to fix\nthis bug. So I'd rather leave it to happen along with the larger\nAPI changes (getting rid of am_sync) that are proposed for v14.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 18 Apr 2020 12:00:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Race condition in SyncRepGetSyncStandbysPriority" }, { "msg_contents": "On Sun, 19 Apr 2020 at 01:00, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Masahiko Sawada <masahiko.sawada@2ndquadrant.com> writes:\n> > On Sat, 18 Apr 2020 at 00:31, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> + /* Quick out if not even configured to be synchronous */\n> >> + if (SyncRepConfig == NULL)\n> >> + return false;\n>\n> > I felt strange a bit that we do the above check in\n> > SyncRepGetSyncRecPtr() because SyncRepReleaseWaiters() which is the\n> > only caller says the following before calling it:\n>\n> Notice there was such a test in SyncRepGetSyncRecPtr already --- I just\n> moved it to be before doing some work instead of after.\n>\n> > Can we either change it to an assertion, move it to before acquiring\n> > SyncRepLock in SyncRepReleaseWaiters or just remove it?\n>\n> I have no objection to that in principle, but it seems like it's a\n> change in SyncRepGetSyncRecPtr's API that is not necessary to fix\n> this bug. 
So I'd rather leave it to happen along with the larger\n> API changes (getting rid of am_sync) that are proposed for v14.\n\nAgreed.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 20 Apr 2020 14:35:23 +0900", "msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in SyncRepGetSyncStandbysPriority" }, { "msg_contents": "\n\nOn 2020/04/18 0:31, Tom Lane wrote:\n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n>> At Fri, 17 Apr 2020 16:03:34 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>>> I agree that it might be worth considering the removal of am_sync for\n>>> the master branch or v14. But I think that it should not be\n>>> back-patched.\n> \n>> Ah! Agreed.\n> \n> Yeah, that's not necessary to fix the bug. I'd be inclined to leave\n> it for v14 at this point.\n> \n> I don't much like the patch Fujii-san posted, though. An important part\n> of the problem, IMO, is that SyncRepGetSyncStandbysPriority is too\n> complicated and it's unclear what dependencies it has on the set of\n> priorities in shared memory being consistent. His patch does not improve\n> that situation; if anything it makes it worse.\n\nUnderstood.\n\n> \n> If we're concerned about not breaking ABI in the back branches, what\n> I propose we do about that is just leave SyncRepGetSyncStandbys in\n> place but not used by the core code, and remove it only in HEAD.\n> We can do an absolutely minimal fix for the assertion failure, in\n> case anybody is calling that code, by just dropping the Assert and\n> letting SyncRepGetSyncStandbys return NIL if it falls out. 
(Or we\n> could let it return the incomplete list, which'd be the behavior\n> you get today in a non-assert build.)\n> \n> Also, I realized while re-reading my patch that Kyotaro-san is onto\n> something about the is_sync_standby flag not being necessary: instead\n> we can just have the new function SyncRepGetCandidateStandbys return\n> a reduced count. I'd initially believed that it was necessary for\n> that function to return the rejected candidate walsenders along with\n> the accepted ones, but that was a misunderstanding. I still don't\n> want its API spec to say anything about ordering of the result array,\n> but we don't need to.\n> \n> So that leads me to the attached. I propose applying this to the\n> back branches except for the rearrangement of WALSnd field order.\n> In HEAD, I'd remove SyncRepGetSyncStandbys and subroutines altogether.\n\nThanks for making and committing the patch!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 20 Apr 2020 15:05:46 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in SyncRepGetSyncStandbysPriority" } ]
[ { "msg_contents": "Hi,\n\nI've found rr [1] very useful to debug issues in postgres. The ability\nto hit a bug, and then e.g. identify a pointer with problematic\ncontents, set a watchpoint on its contents, and reverse-continue is\nextremely powerful.\n\nUnfortunately, when running postgres, it currently occasionally triggers\nspurious stack-too-deep errors. That turns out to be because it has to\nuse an alternative stack in some corner cases (IIUC when a signal\narrives while already in a signal handler). That corner case can\nunfortunately be hit from within postmaster, and at least can lead to\nsigusr1_handler() being called with an alternative stack set.\n\nUnfortunately that means that processes that postmaster fork()s while\nusing that alternative stack will continue their lives using that\nalternative stack. Which then subsequently means that our stack depth\nchecks always trigger.\n\nI've not seen this trigger for normal backends (which makes sense,\nthey're not started from a signal handler), but for bgworkers. In\nparticular parallel workers are prone to hit the issue.\n\n\nI've locally fixed the issue by computing the stack base address anew\nfor postmaster children. Currently in InitPostmasterChild().\n\nI'd like to get that change upstream. The rr hackers have fixed a number\nof other issues that could be hit with postgres, but they couldn't see a\ngood way to address the potential for a different signal stack in this\nedge case. And it doesn't seem crazy to me to compute the stack base\nagain in postmaster children: It's cheap enough and it's extremely\nunlikely that postmaster uses up a crazy amount of stack.\n\nI also don't find it too crazy to guard against forks in signal handlers\nleading to a different stack base address. It's a pretty odd thing to\ndo.\n\n\nTom, while imo not a fix of the right magnitude here: Are you planning /\nhoping to work again on your postmaster latch patch? 
I think it'd be\nreally good if we could restructure the postmaster code to do far far\nless in signal handlers. And the postmaster latch patch seems like a big\nstep in that direction. I think we mostly dropped it due to the release\nschedule last time round?\n\nGreetings,\n\nAndres Freund\n\n[1] https://github.com/mozilla/rr/\n\n\n", "msg_date": "Fri, 27 Mar 2020 11:22:17 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Reinitialize stack base after fork (for the benefit of rr)?" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I've locally fixed the issue by computing the stack base address anew\n> for postmaster children. Currently in InitPostmasterChild().\n\n> I'd like to get that change upstream. The rr hackers have fixed a number\n> of other issues that could be hit with postgres, but they couldn't see a\n> good way to address the potential for a different signal stack in this\n> edge case. And it doesn't seem crazy to me to compute the stack base\n> again in postmaster children: It's cheap enough and it's extremely\n> unlikely that postmaster uses up a crazy amount of stack.\n\nSeems reasonable. I think we'd probably also need this in the\nEXEC_BACKEND case, in case ASLR puts the child process's stack\nsomewhere else. Can you merge your concern with that one?\n\nOn the other hand, it might be better to not launch children from the\nsignal handler, because I don't think we should assume the alternate\nstack can grow as large as the main one. Does POSIX talk about this?\n\n> Tom, while imo not a fix of the right magnitude here: Are you planning /\n> hoping to work again on your postmaster latch patch?\n\nUm ... -ESWAPPEDOUT. What are you thinking of?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 27 Mar 2020 14:34:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Reinitialize stack base after fork (for the benefit of rr)?" 
}, { "msg_contents": "Hi,\n\nOn 2020-03-27 14:34:56 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > I've locally fixed the issue by computing the stack base address anew\n> > for postmaster children. Currently in InitPostmasterChild().\n>\n> > I'd like to get that change upstream. The rr hackers have fixed a number\n> > of other issues that could be hit with postgres, but they couldn't see a\n> > good way to address the potential for a different signal stack in this\n> > edge case. And it doesn't seem crazy to me to compute the stack base\n> > again in postmaster children: It's cheap enough and it's extremely\n> > unlikely that postmaster uses up a crazy amount of stack.\n>\n> Seems reasonable. I think we'd probably also need this in the\n> EXEC_BACKEND case, in case ASLR puts the child process's stack\n> somewhere else. Can you merge your concern with that one?\n\nWe currently already do that there, in SubPostmasterMain(). If we add a\nset_stack_base() to InitPostmasterChild() we can remove it from there,\nthough.\n\n\n> On the other hand, it might be better to not launch children from the\n> signal handler, because I don't think we should assume the alternate\n> stack can grow as large as the main one. Does POSIX talk about this?\n\nI strongly agree that it'd be better - independent of what we conclude\nre a localized fix for rr. I think I looked for what specs around this a\nwhile ago and couldn't find much. fork() is listed as signal safe (but\nthere was discussion about removing it - going nowhere I think).\n\n\n> > Tom, while imo not a fix of the right magnitude here: Are you planning /\n> > hoping to work again on your postmaster latch patch?\n>\n> Um ... -ESWAPPEDOUT. What are you thinking of?\n\nhttps://postgr.es/m/18193.1492793404%40sss.pgh.pa.us\n\nThat doesn't convert all that much of postmaster to latches, but once\nthe basic infrastructure is in place, it doesn't seem too hard to\nconvert more. 
In particular sigusr1_handler, which is the relevant one\nhere, looks fairly easy. SIGHUP_handler(), reaper() shouldn't be hard\neither. Whether it could make sense to convert pmdie for SIGQUIT is less\nclear to me, but also seems less clearly necessary: We don't fork, and\nshutting down anyway.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 27 Mar 2020 11:51:47 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Reinitialize stack base after fork (for the benefit of rr)?" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-03-27 14:34:56 -0400, Tom Lane wrote:\n>> Andres Freund <andres@anarazel.de> writes:\n>>> Tom, while imo not a fix of the right magnitude here: Are you planning /\n>>> hoping to work again on your postmaster latch patch?\n\n>> Um ... -ESWAPPEDOUT. What are you thinking of?\n\n> https://postgr.es/m/18193.1492793404%40sss.pgh.pa.us\n\nOh, I thought we'd dropped that line of thinking in favor of trying\nto not do work in the postmaster signal handlers (i.e. I thought *you*\nwere pushing this forward, not me).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 27 Mar 2020 14:59:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Reinitialize stack base after fork (for the benefit of rr)?" }, { "msg_contents": "Hi,\n\nOn 2020-03-27 14:59:56 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2020-03-27 14:34:56 -0400, Tom Lane wrote:\n> >> Andres Freund <andres@anarazel.de> writes:\n> >>> Tom, while imo not a fix of the right magnitude here: Are you planning /\n> >>> hoping to work again on your postmaster latch patch?\n> \n> >> Um ... -ESWAPPEDOUT. What are you thinking of?\n> \n> > https://postgr.es/m/18193.1492793404%40sss.pgh.pa.us\n> \n> Oh, I thought we'd dropped that line of thinking in favor of trying\n> to not do work in the postmaster signal handlers (i.e. 
I thought *you*\n> were pushing this forward, not me).\n\nHm - the way I imagine that to work is that we'd do a SetLatch() in the\nvarious signal handlers and that the main loop would then react to\ngot_sigchld type variables. But for that we'd need latch support in\npostmaster - which I think is pretty exactly what your patch in the\nabove message does?\n\nOf course there'd need to be several subsequent patches to move work out\nof signal handlers into the main loop.\n\nWere you thinking of somehow doing that without using a latch?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 27 Mar 2020 13:39:54 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Reinitialize stack base after fork (for the benefit of rr)?" }, { "msg_contents": "On Fri, Mar 27, 2020 at 11:22 AM Andres Freund <andres@anarazel.de> wrote:\n> I've found rr [1] very useful to debug issues in postgres. The ability\n> to hit a bug, and then e.g. identify a pointer with problematic\n> contents, set a watchpoint on its contents, and reverse-continue is\n> extremely powerful.\n\nI agree that rr is very useful. It would be great if we had a totally\nsmooth workflow for debugging using rr.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 4 Apr 2020 21:02:56 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Reinitialize stack base after fork (for the benefit of rr)?" }, { "msg_contents": "On 2020-04-04 21:02:56 -0700, Peter Geoghegan wrote:\n> On Fri, Mar 27, 2020 at 11:22 AM Andres Freund <andres@anarazel.de> wrote:\n> > I've found rr [1] very useful to debug issues in postgres. The ability\n> > to hit a bug, and then e.g. identify a pointer with problematic\n> > contents, set a watchpoint on its contents, and reverse-continue is\n> > extremely powerful.\n> \n> I agree that rr is very useful. 
It would be great if we had a totally\n> smooth workflow for debugging using rr.\n\nI just pushed that.\n\n\n", "msg_date": "Sun, 5 Apr 2020 18:54:09 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Reinitialize stack base after fork (for the benefit of rr)?" }, { "msg_contents": "On Sun, Apr 5, 2020 at 6:54 PM Andres Freund <andres@anarazel.de> wrote:\n> I just pushed that.\n\nGreat!\n\nI have found that it's useful to use rr to debug Postgres by following\ncertain recipes. I'll share some of the details now, in case anybody\nelse wants to start using rr and isn't sure where to start.\n\nI have a script that records a postgres session using rr with these options:\n\nrr record -M /code/postgresql/$BRANCH/install/bin/postgres \\\n -D /code/postgresql/$BRANCH/data \\\n --log_line_prefix=\"%m %p \" \\\n --autovacuum=off \\\n --effective_cache_size=1GB \\\n --random_page_cost=4.0 \\\n --work_mem=4MB \\\n --maintenance_work_mem=64MB \\\n --fsync=off \\\n --log_statement=all \\\n --log_min_messages=DEBUG5 \\\n --max_connections=50 \\\n --shared_buffers=32MB\n\nMost of these settings were taken from a similar script that I use to\nrun Valgrind, so the particulars may not matter much -- though it's\nuseful to make the server logs as verbose as possible (you'll see why\nin a minute).\n\nI find it quite practical to run \"make installcheck\" against the\nserver, recording the entire execution. I find that it's not that much\nslower than just running the tests against a regular debug build of\nPostgres. It's still much faster than Valgrind, for example.\n(Replaying the recording seems to be where having a high end machine\nhelps a lot.)\n\nOnce the tests are done, I stop Postgres in the usual way (Ctrl + C).\nThe recording is saved to the $HOME/.local/share/rr/ directory on my\nLinux distro -- rr creates a directory for each distinct recording in\nthis parent directory. 
rr also maintains a symlink (latest-trace) that\npoints to the latest recording directory, which I rely on most of the\ntime when replaying a recording (it's the default). I am careful to\nnot leave too many recordings around, since they're large enough that\nthat could become a concern.\n\nThe record/Postgres terminal has output that looks like this:\n\n[rr 1786705 1241867]2020-04-04 21:55:05.018 PDT 1786705 DEBUG:\nCommitTransaction(1) name: unnamed; blockState: STARTED; state:\nINPROGRESS, xid/subid/cid: 63992/1/2\n[rr 1786705 1241898]2020-04-04 21:55:05.019 PDT 1786705 DEBUG:\nStartTransaction(1) name: unnamed; blockState: DEFAULT; state:\nINPROGRESS, xid/subid/cid: 0/1/0\n[rr 1786705 1241902]2020-04-04 21:55:05.019 PDT 1786705 LOG:\nstatement: CREATE TYPE test_type_empty AS ();\n[rr 1786705 1241906]2020-04-04 21:55:05.020 PDT 1786705 DEBUG:\nCommitTransaction(1) name: unnamed; blockState: STARTED; state:\nINPROGRESS, xid/subid/cid: 63993/1/1\n[rr 1786705 1241936]2020-04-04 21:55:05.020 PDT 1786705 DEBUG:\nStartTransaction(1) name: unnamed; blockState: DEFAULT; state:\nINPROGRESS, xid/subid/cid: 0/1/0\n[rr 1786705 1241940]2020-04-04 21:55:05.020 PDT 1786705 LOG:\nstatement: DROP TYPE test_type_empty;\n[rr 1786705 1241944]2020-04-04 21:55:05.021 PDT 1786705 DEBUG: drop\nauto-cascades to composite type test_type_empty\n[rr 1786705 1241948]2020-04-04 21:55:05.021 PDT 1786705 DEBUG: drop\nauto-cascades to type test_type_empty[]\n[rr 1786705 1241952]2020-04-04 21:55:05.021 PDT 1786705 DEBUG:\nMultiXact: setting OldestMember[2] = 9\n[rr 1786705 1241956]2020-04-04 21:55:05.021 PDT 1786705 DEBUG:\nCommitTransaction(1) name: unnamed; blockState: STARTED; state:\nINPROGRESS, xid/subid/cid: 63994/1/3\n\nThe part of each log line in square brackets comes from rr (since we\nused -M when recording) -- the first number is a PID, the second an\nevent number. 
I usually don't care about the PIDs, though, since the\nevent number alone unambiguously identifies a particular \"event\" in a\nparticular backend (rr recordings are single threaded, even when there\nare multiple threads or processes). Suppose I want to get to the\n\"CREATE TYPE test_type_empty AS ();\" query -- I can get to the end of\nthe query by replaying the recording with this option:\n\n$ rr replay -M -g 1241902\n\nReplaying the recording like this takes me to the point where the\nPostgres backend prints the log message at the end of executing the\nquery I mentioned -- I get a familiar gdb debug server (rr implements\na gdb backend). This isn't precisely the point of execution that\ninterests me, but it's close enough. I can easily set a breakpoint to\nthe precise function I'm interested in, and then \"reverse-continue\" to\nget there by going backwards.\n\nI can also find the point where a particular backend starts by using\nthe fork option instead. So for the PID 1786705, that would look like:\n\n$ rr replay -M -f 1786705\n\n(Don't try to use the similar -p option, since that starts a debug\nserver when the pid has been exec'd.)\n\nrr really shines when debugging things like tap tests, where there is\ncomplex scaffolding that may run multiple Postgres servers. You can\nrun an entire \"rr record make check\", without having to worry about\nhow that scaffolding works. Once you have useful event numbers to work\noff of, it doesn't take too long to get an interactive debugging\nsession in the backend of interest by applying the same techniques.\n\nNote that saving the output of a recording using standard tools like\n\"tee\" seems to have some issues [1]. I've found it helpful to get log\noutput (complete with these event numbers) by doing an \"autopilot\"\nreplay, like this:\n\n$ rr replay -M -a &> rr.log\n\nThis may actually be required when running \"make installcheck\" or\nsomething, since there might be megabytes of log output. 
I usually\ndon't need to bother to generate logs in this way, though. It might\ntake a few minutes to do an autopilot replay, since rr will replay\neverything that was recorded in sub realtime.\n\nOne last tip: rr pack can be used to save a recording in a fairly\nstable format -- it copies the needed files into the trace:\n\n$ rr pack\n\nI haven't used this one yet. It seems like it would be useful if I\nwanted to save a recording for more than a day or two. Because every\nsingle detail of the recording (e.g. pointers, PIDs) is stable, it\nseems possible to treat a recording as a totally self contained thing.\n\nOther resources:\n\nhttps://github.com/mozilla/rr/wiki/Usage\nhttps://github.com/mozilla/rr/wiki/Debugging-protips\n\n[1] https://github.com/mozilla/rr/issues/91\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 5 Apr 2020 20:35:50 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Reinitialize stack base after fork (for the benefit of rr)?" }, { "msg_contents": "Hi,\n\nOn 2020-04-05 20:35:50 -0700, Peter Geoghegan wrote:\n> I have found that it's useful to use rr to debug Postgres by following\n> certain recipes. I'll share some of the details now, in case anybody\n> else wants to start using rr and isn't sure where to start.\n\nPerhaps put it on a wiki page?\n\n> I have a script that records a postgres session using rr with these options:\n> \n> rr record -M /code/postgresql/$BRANCH/install/bin/postgres \\\n> -D /code/postgresql/$BRANCH/data \\\n> --log_line_prefix=\"%m %p \" \\\n> --autovacuum=off \\\n\nWere you doing this because of occasional failures in autovacuum\nworkers? 
If so, that shouldn't be necessary after the stack base change\n(previously workers IIRC also could start with the wrong stack base -\nbut didn't end up checking stack depth except for expression indexes).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 5 Apr 2020 20:56:21 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Reinitialize stack base after fork (for the benefit of rr)?" }, { "msg_contents": "On Sun, Apr 5, 2020 at 8:56 PM Andres Freund <andres@anarazel.de> wrote:\n> Perhaps put it on a wiki page?\n\nI added a new major section to the \"getting a stack trace\" wiki page:\n\nhttps://wiki.postgresql.org/wiki/Getting_a_stack_trace_of_a_running_PostgreSQL_backend_on_Linux/BSD#Recording_Postgres_using_rr_Record_and_Replay_Framework\n\nFeel free to add to and edit this section yourself.\n\n> Were you doing this because of occasional failures in autovacuum\n> workers? If so, that shouldn't be necessary after the stack base change\n> (previously workers IIRC also could start with the wrong stack base -\n> but didn't end up checking stack depth except for expression indexes).\n\nNo, just a personal preference for things like this.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 5 Apr 2020 21:40:34 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Reinitialize stack base after fork (for the benefit of rr)?" } ]
[ { "msg_contents": "Hi,\nCan someone check if there is a copy and paste error, at file:\n\\usr\\backend\\commands\\analyze.c, at lines 2225 and 2226?\n\nint num_mcv = stats->attr->attstattarget;\nint num_bins = stats->attr->attstattarget;\n\nIf they really are the same values, it could be changed to:\n\nint num_mcv = stats->attr->attstattarget;\nint num_bins = num_mcv;\n\nTo silence this alert.\n\nbest regards,\nRanier Vilela", "msg_date": "Fri, 27 Mar 2020 19:11:17 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Possible copy and past error? (\\usr\\backend\\commands\\analyze.c)" }, { "msg_contents": "Ranier Vilela <ranier.vf@gmail.com> writes:\n> Can someone check if there is a copy and paste error, at file:\n> \\usr\\backend\\commands\\analyze.c, at lines 2225 and 2226?\n> int num_mcv = stats->attr->attstattarget;\n> int num_bins = stats->attr->attstattarget;\n\nNo, that's intentional I believe. Those are independent variables\nthat just happen to start out with the same value.\n\n> If they really are the same values, it could be changed to:\n\n> int num_mcv = stats->attr->attstattarget;\n> int num_bins = num_mcv;\n\nThat would make it look like they are interdependent, which they are not.\n\n> To silence this alert.\n\nIf you have a tool that complains about that coding, I think the\ntool needs a solid whack upside the head. There's nothing wrong\nwith the code, and it clearly expresses the intent, which the other\nway doesn't. (Or in other words: it's the compiler's job to\noptimize away the duplicate fetch. 
Not the programmer's.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 27 Mar 2020 19:49:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Possible copy and past error? (\\usr\\backend\\commands\\analyze.c)" }, { "msg_contents": "Em sex., 27 de mar. de 2020 às 20:49, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n> Ranier Vilela <ranier.vf@gmail.com> writes:\n> > Can someone check if there is a copy and paste error, at file:\n> > \\usr\\backend\\commands\\analyze.c, at lines 2225 and 2226?\n> > int num_mcv = stats->attr->attstattarget;\n> > int num_bins = stats->attr->attstattarget;\n>\n> No, that's intentional I believe. Those are independent variables\n> that just happen to start out with the same value.\n>\nNeither you nor I can say with 100% certainty that the original author's\nintention.\n\n>\n> > If they really are the same values, it could be changed to:\n>\n> > int num_mcv = stats->attr->attstattarget;\n> > int num_bins = num_mcv;\n>\n> That would make it look like they are interdependent, which they are not.\n>\n> That's exactly why, instead of proposing a patch, I asked a question.\n\n\n> > To silence this alert.\n>\n> If you have a tool that complains about that coding, I think the\n> tool needs a solid whack upside the head. There's nothing wrong\n> with the code, and it clearly expresses the intent, which the other\n> way doesn't. (Or in other words: it's the compiler's job to\n> optimize away the duplicate fetch. Not the programmer's.)\n>\nI completely disagree. My tools have proven their worth, including finding\nserious errors in the code, which fortunately have been fixed by other\ncommitters.\nWhen issuing this alert, the tool does not value judgment regarding\nperformance or optimization, but it does an excellent job of finding\nsimilar patterns in adjacent lines, and the only thing it asked for was to\nbe asked if this was really the case. 
original author's intention.\n\nregards,\nRanier Vilela", "msg_date": "Sat, 28 Mar 2020 07:48:22 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Possible copy and past error? \n(\\usr\\backend\\commands\\analyze.c)" }, { "msg_contents": "On Sat, Mar 28, 2020 at 07:48:22AM -0300, Ranier Vilela wrote:\n> I completely disagree. My tools have proven their worth, including finding\n> serious errors in the code, which fortunately have been fixed by other\n> committers.\n\nFWIW, I think that the rule to always take Coverity's reports with a\npinch of salt applies for any report. \n\n> When issuing this alert, the tool does not value judgment regarding\n> performance or optimization, but it does an excellent job of finding\n> similar patterns in adjacent lines, and the only thing it asked for was to\n> be asked if this was really the case. original author's intention.\n\nThe code context matters a lot, but here let's leave this code alone.\nThere is nothing wrong with it.\n--\nMichael", "msg_date": "Mon, 30 Mar 2020 17:15:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Possible copy and past error? 
(\\usr\\backend\\commands\\analyze.c)" }, { "msg_contents": "On Sat, Mar 28, 2020 at 07:48:22AM -0300, Ranier Vilela wrote:\n> I completely disagree. My tools have proven their worth, including finding\n> serious errors in the code, which fortunately have been fixed by other\n> committers.\n\nFWIW, I think that the rule to always take Coverity's reports with a\npinch of salt applies for any report. \n\n> When issuing this alert, the tool does not value judgment regarding\n> performance or optimization, but it does an excellent job of finding\n> similar patterns in adjacent lines, and the only thing it asked for was to\n> be asked if this was really the case. original author's intention.\n\nThe code context matters a lot, but here let's leave this code alone.\nThere is nothing wrong with it.\n--\nMichael", "msg_date": "Mon, 30 Mar 2020 17:15:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Possible copy and past error? (\\usr\\backend\\commands\\analyze.c)" }, { "msg_contents": "On Sat, Mar 28, 2020 at 11:49 AM Ranier Vilela <ranier.vf@gmail.com> wrote:\n>\n> Em sex., 27 de mar. de 2020 às 20:49, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n>>\n>> Ranier Vilela <ranier.vf@gmail.com> writes:\n>> > Can someone check if there is a copy and paste error, at file:\n>> > \\usr\\backend\\commands\\analyze.c, at lines 2225 and 2226?\n>> > int num_mcv = stats->attr->attstattarget;\n>> > int num_bins = stats->attr->attstattarget;\n>>\n>> No, that's intentional I believe. Those are independent variables\n>> that just happen to start out with the same value.\n>\n> Neither you nor I can say with 100% certainty that the original author's intention.\n\nGiven that Tom is the original author, I think it's a lot more likely\nthat he knows what the original authors intention was. 
It's certainly\nbeen a few years, so it probably isn't 100%, but the likelihood is\npretty good.\n\n\n>> > To silence this alert.\n>>\n>> If you have a tool that complains about that coding, I think the\n>> tool needs a solid whack upside the head. There's nothing wrong\n>> with the code, and it clearly expresses the intent, which the other\n>> way doesn't. (Or in other words: it's the compiler's job to\n>> optimize away the duplicate fetch. Not the programmer's.)\n>\n> I completely disagree. My tools have proven their worth, including finding serious errors in the code, which fortunately have been fixed by other committers.\n> When issuing this alert, the tool does not value judgment regarding performance or optimization, but it does an excellent job of finding similar patterns in adjacent lines, and the only thing it asked for was to be asked if this was really the case. original author's intention.\n\nAll tools will give false positives. This simply seems one of those --\nit certainly could have been indicating a problem, but in this case it\ndidn't.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Mon, 30 Mar 2020 11:06:45 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Possible copy and past error? (\\usr\\backend\\commands\\analyze.c)" }, { "msg_contents": "Em seg., 30 de mar. de 2020 às 05:16, Michael Paquier <michael@paquier.xyz>\nescreveu:\n\n> On Sat, Mar 28, 2020 at 07:48:22AM -0300, Ranier Vilela wrote:\n> > I completely disagree. 
My tools have proven their worth, including\n> finding\n> > serious errors in the code, which fortunately have been fixed by other\n> > committers.\n>\n> FWIW, I think that the rule to always take Coverity's reports with a\n> pinch of salt applies for any report.\n>\nI have certainly taken this advice seriously, since I have received all\nkinds of say, \"words of discouragement\".\nI understand perfectly that the list is very busy and perhaps the patience\nwith mistakes is very little, but these attitudes do not help new people to\nwork here.\nI don't get paid to work with PostgreSQL, so consideration and recognition\nare the only rewards for now.\n\n\n>\n> > When issuing this alert, the tool does not value judgment regarding\n> > performance or optimization, but it does an excellent job of finding\n> > similar patterns in adjacent lines, and the only thing it asked for was\n> to\n> > be asked if this was really the case. original author's intention.\n>\n> The code context matters a lot, but here let's leave this code alone.\n> There is nothing wrong with it.\n>\nThat is the question. Looking only at the code, there is no way to know\nimmediately, that there is nothing wrong. Not even a comment warning.\nThat's what the tool asked for, ask if there's really nothing wrong.\n\nregards,\nRanier Vilela", "msg_date": "Mon, 30 Mar 2020 09:25:27 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Possible copy and past error? (\\usr\\backend\\commands\\analyze.c)" }, { "msg_contents": "Em seg., 30 de mar. de 2020 às 06:06, Magnus Hagander <magnus@hagander.net>\nescreveu:\n\n> On Sat, Mar 28, 2020 at 11:49 AM Ranier Vilela <ranier.vf@gmail.com>\n> wrote:\n> >\n> > Em sex., 27 de mar. de 2020 às 20:49, Tom Lane <tgl@sss.pgh.pa.us>\n> escreveu:\n> >>\n> >> Ranier Vilela <ranier.vf@gmail.com> writes:\n> >> > Can someone check if there is a copy and paste error, at file:\n> >> > \\usr\\backend\\commands\\analyze.c, at lines 2225 and 2226?\n> >> > int num_mcv = stats->attr->attstattarget;\n> >> > int num_bins = stats->attr->attstattarget;\n> >>\n> >> No, that's intentional I believe. 
Those are independent variables\n> >> that just happen to start out with the same value.\n> >\n> > Neither you nor I can say with 100% certainty that the original author's\n> intention.\n>\n> Given that Tom is the original author, I think it's a lot more likely\n> that he knows what the original authors intention was. It's certainly\n> been a few years, so it probably isn't 100%, but the likelihood is\n> pretty good.\n>\nOf course, now we all know..\n\n\n>\n>\n> >> > To silence this alert.\n> >>\n> >> If you have a tool that complains about that coding, I think the\n> >> tool needs a solid whack upside the head. There's nothing wrong\n> >> with the code, and it clearly expresses the intent, which the other\n> >> way doesn't. (Or in other words: it's the compiler's job to\n> >> optimize away the duplicate fetch. Not the programmer's.)\n> >\n> > I completely disagree. My tools have proven their worth, including\n> finding serious errors in the code, which fortunately have been fixed by\n> other committers.\n> > When issuing this alert, the tool does not value judgment regarding\n> performance or optimization, but it does an excellent job of finding\n> similar patterns in adjacent lines, and the only thing it asked for was to\n> be asked if this was really the case. original author's intention.\n>\n> All tools will give false positives. This simply seems one of those --\n> it certainly could have been indicating a problem, but in this case it\n> didn't.\n>\nthat's what you said, it could be a big problem, if it were the case of\ncopy-past error.\nI do not consider it a false positive, since the tool did not claim it was\na bug, she warned and asked to question.\n\nregards,\nRanier Vilela", "msg_date": "Mon, 30 Mar 2020 09:29:51 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Possible copy and past error? (\\usr\\backend\\commands\\analyze.c)" } ]
[ { "msg_contents": "Hi\n\nI am playing with pspg and inotify support. It is working pretty well and\nnow can be nice if forwarding to output file can be configured little bit\nmore. Now, only append mode is supported. But append mode doesn't work with\npspg pager. So I propose new pset option \"file_output_mode\" with two\npossible values \"append\" (default, current behave) and \"rewrite\" (new mode).\n\nUsage:\n\n\\pset file_ouput_mode rewrite\n\nIn this mode, the file is opened before printing and it is closed after\nprinting.\n\nWhat do you think about this proposal?\n\nRegards\n\nPavel", "msg_date": "Sat, 28 Mar 2020 06:30:06 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "proposal - psql output file write mode" }, { "msg_contents": "so 28. 3. 2020 v 6:30 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> I am playing with pspg and inotify support. It is working pretty well and\n> now can be nice if forwarding to output file can be configured little bit\n> more. Now, only append mode is supported. But append mode doesn't work with\n> pspg pager. 
So I propose new pset option \"file_output_mode\" with two\n> possible values \"append\" (default, current behave) and \"rewrite\" (new mode).\n>\n> Usage:\n>\n> \\pset file_ouput_mode rewrite\n>\n> In this mode, the file is opened before printing and it is closed after\n> printing.\n>\n> What do you think about this proposal?\n>\n\nI tried to implement this feature and it doesn't look like good idea. There\nis not trivial implementation, and looks so costs are higher than benefits.\n\nPavel\n\n\n> Regards\n>\n> Pavel\n>\n>\n>\n", "msg_date": "Mon, 30 Mar 2020 09:43:04 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: proposal - psql output file write mode" } ]
[ { "msg_contents": "A patch for converting postgresql.conf.sample to \npostgresql.conf.sample.in . This feature allows you to manage the \ncontents of postgresql.conf.sample at the configure phase.\n\nUsage example:\n\n ./configure --enable-foo\n\n\nconfigure.in:\n\n foo_params=$(cat <<-END\n foo_param1 = on\n foo_param2 = 16\n END\n )\n AC_SUBST(foo_params)\n\n\npostgresql.conf.sample.in:\n\n @foo_params@\n\n\npostgresql.conf.sample:\n\n foo_param1 = on\n foo_param2 = 16\n\n--", "msg_date": "Sat, 28 Mar 2020 12:00:08 +0300", "msg_from": "i.taranov@postgrespro.ru", "msg_from_op": true, "msg_subject": "[PATCH] postgresql.conf.sample->postgresql.conf.sample.in" }, { "msg_contents": "On 2020-03-28 10:00, i.taranov@postgrespro.ru wrote:\n> A patch for converting postgresql.conf.sample to\n> postgresql.conf.sample.in . This feature allows you to manage the\n> contents of postgresql.conf.sample at the configure phase.\n> \n> Usage example:\n> \n> ./configure --enable-foo\n> \n> \n> configure.in:\n> \n> foo_params=$(cat <<-END\n> foo_param1 = on\n> foo_param2 = 16\n> END\n> )\n> AC_SUBST(foo_params)\n> \n> \n> postgresql.conf.sample.in:\n> \n> @foo_params@\n> \n> \n> postgresql.conf.sample:\n> \n> foo_param1 = on\n> foo_param2 = 16\n\nWhy do we need that? We already have the capability to make initdb edit \npostgresql.conf.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 28 Mar 2020 12:13:38 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgresql.conf.sample->postgresql.conf.sample.in" }, { "msg_contents": "This is usable for build installable postgresql.conf.SAMPLE. 
At the\nconfigure phase, it is possible to include / exclude parameters in the\nsample depending on the selected options (--enable - * / - disable- *\netc ..)\n\nOn Sat, Mar 28, 2020 at 2:21 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2020-03-28 10:00, i.taranov@postgrespro.ru wrote:\n> > A patch for converting postgresql.conf.sample to\n> > postgresql.conf.sample.in . This feature allows you to manage the\n> > contents of postgresql.conf.sample at the configure phase.\n> >\n> > Usage example:\n> >\n> > ./configure --enable-foo\n> >\n> >\n> > configure.in:\n> >\n> > foo_params=$(cat <<-END\n> > foo_param1 = on\n> > foo_param2 = 16\n> > END\n> > )\n> > AC_SUBST(foo_params)\n> >\n> >\n> > postgresql.conf.sample.in:\n> >\n> > @foo_params@\n> >\n> >\n> > postgresql.conf.sample:\n> >\n> > foo_param1 = on\n> > foo_param2 = 16\n>\n> Why do we need that? We already have the capability to make initdb edit\n> postgresql.conf.\n>\n> --\n> Peter Eisentraut http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 28 Mar 2020 15:06:06 +0300", "msg_from": "\"Ivan N. Taranov\" <i.taranov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgresql.conf.sample->postgresql.conf.sample.in" }, { "msg_contents": "\"Ivan N. Taranov\" <i.taranov@postgrespro.ru> writes:\n> This is usable for build installable postgresql.conf.SAMPLE. At the\n> configure phase, it is possible to include / exclude parameters in the\n> sample depending on the selected options (--enable - * / - disable- *\n> etc ..)\n\nI'm with Peter on this: you're proposing to complicate matters for\nno real gain.\n\nAs a former packager, I can readily imagine situations where somebody\nwants to adjust the initial contents of postgresql.conf compared to\nwhat's distributed --- I've done it myself. 
But anybody who's in that\nsituation has got lots of other tools they can use for the purpose\n(patch(1) being a pretty favorite one, since it can also apply other\nsorts of code changes). Even more to the point, they've probably got\nan existing process for this, which would be needlessly broken by\nrenaming the file as-distributed.\n\nAlso, of the various ways that one might inject a modification,\nediting the configure.in file and then having to re-autoconf is\none of the more painful ones, probably only exceeded by trying\nto maintain a patch against configure itself :-(\n\nAs far as the project's own internal needs go, we do already have\ncases where configure's choices need to feed into postgresql.conf, but\nhaving initdb do all the actual editing has worked out fine for that.\nI don't think splitting the responsibility between configure time and\ninitdb time would be an improvement --- for one thing, it'd be more\npainful not less so to deal with cases where considerations at both\nlevels affect the same postgresql.conf entries.\n\nSo if you want this proposal to go anywhere, you need a much more\nconcrete and compelling example of something for which this is the\nonly sane way to do it.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 28 Mar 2020 11:54:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgresql.conf.sample->postgresql.conf.sample.in" }, { "msg_contents": "Patch - yes, a good way. but 1) requires invasion to the makefile 2)\nmakes changes in the file stored on git..\n\nin case postgresql.conf.sample.in is a template, there are no such\nproblems. 
and this does not bother those who if someone assumes the\nexistence of the postgres.conf.sample file\n\n>Even more to the point, they've probably got an existing process for this, which would be needlessly broken by renaming the file as-distributed.\n\n\nI agree, this is a serious reason not to do this, especially if the\nvendor stores changes in postgres.conf.samle in git\n\n> So if you want this proposal to go anywhere, you need a much more concrete and compelling example of something for which this is the only sane way to do it.\n\n\nThis feature seems usable for preparing a certain number of packages\nconsisting of different features. Each feature can have its own set of\nsample settings in postgres.conf.sample. In this case, using makefile\n+ patch is more ugly.\n\nIn any case, I am grateful for the answer and clarification!\n\n\n", "msg_date": "Sat, 28 Mar 2020 19:26:08 +0300", "msg_from": "\"Ivan N. Taranov\" <i.taranov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH] postgresql.conf.sample->postgresql.conf.sample.in" } ]
[ { "msg_contents": "Hi,\n\nTheses variables, are assigned with values that never is used and, can\nsafely have their values removed.\n\nbest regards,\nRanier Vilela", "msg_date": "Sat, 28 Mar 2020 10:33:23 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH'] Variables assigned with values that is never used." }, { "msg_contents": "Em sáb., 28 de mar. de 2020 às 10:33, Ranier Vilela <ranier.vf@gmail.com>\nescreveu:\n\n> Hi,\n>\n> Theses variables, are assigned with values that never is used and, can\n> safely have their values removed.\n>\n1.\nhttps://github.com/postgres/postgres/commit/f0ca378d4c139eda99ef14998115c1674dac3fc5\n\ndiff --git a/src/backend/access/nbtree/nbtsplitloc.c\nb/src/backend/access/nbtree/nbtsplitloc.c\nindex 8ba055be9e..15ac106525 100644\n--- a/src/backend/access/nbtree/nbtsplitloc.c\n+++ b/src/backend/access/nbtree/nbtsplitloc.c\n@@ -812,7 +812,6 @@ _bt_bestsplitloc(FindSplitData *state, int\nperfectpenalty,\n\n if (penalty <= perfectpenalty)\n {\n- bestpenalty = penalty;\n lowsplit = i;\n break;\n }\n\nCoincidence? I think not.\n\nregards,\nRanier Vilela", "msg_date": "Thu, 16 Apr 2020 19:21:56 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH'] Variables assigned with values that is never used." }, { "msg_contents": "On Sat, Mar 28, 2020 at 10:33:23AM -0300, Ranier Vilela wrote:\n> Theses variables, are assigned with values that never is used and, can\n> safely have their values removed.\n\nI came across this one recently.\n\ncommit ccf85a5512fe7cfd76c6586b67fe06d911428d34\nAuthor: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Thu Apr 23 21:54:27 2020 -0500\n\n unused variable found@AttrDefaultFetch()..\n \n since:\n commit 16828d5c0273b4fe5f10f42588005f16b415b2d8\n Author: Andrew Dunstan <andrew@dunslane.net>\n Date: Wed Mar 28 10:43:52 2018 +1030\n \n Fast ALTER TABLE ADD COLUMN with a non-NULL default\n\ndiff --git a/src/backend/utils/cache/relcache.c b/src/backend/utils/cache/relcache.c\nindex 9f1f11d0c1..f911d2802f 100644\n--- a/src/backend/utils/cache/relcache.c\n+++ b/src/backend/utils/cache/relcache.c\n@@ -4240,7 +4240,6 @@ AttrDefaultFetch(Relation relation)\n \tHeapTuple\thtup;\n \tDatum\t\tval;\n \tbool\t\tisnull;\n-\tint\t\t\tfound;\n \tint\t\t\ti;\n \n \tScanKeyInit(&skey,\n@@ -4251,7 +4250,6 @@ AttrDefaultFetch(Relation relation)\n \tadrel = table_open(AttrDefaultRelationId, AccessShareLock);\n \tadscan = systable_beginscan(adrel, AttrDefaultIndexId, true,\n \t\t\t\t\t\t\t\tNULL, 1, &skey);\n-\tfound = 0;\n \n \twhile (HeapTupleIsValid(htup = systable_getnext(adscan)))\n \t{\n@@ -4266,8 +4264,6 @@ AttrDefaultFetch(Relation relation)\n \t\t\t\telog(WARNING, \"multiple attrdef records found for attr %s of rel %s\",\n \t\t\t\t\t NameStr(attr->attname),\n \t\t\t\t\t RelationGetRelationName(relation));\n-\t\t\telse\n-\t\t\t\tfound++;\n \n \t\t\tval = fastgetattr(htup,\n \t\t\t\t\t\t\t Anum_pg_attrdef_adbin,\n\n\n\n", "msg_date": "Fri, 24 Apr 2020 09:45:01 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", 
"msg_from_op": false, "msg_subject": "Re: [PATCH'] Variables assigned with values that is never used." } ]
[ { "msg_contents": "Hi! Can I get some review on my GSoC proposal ?\n\nhttps://docs.google.com/document/d/1EiIHZjOjf6yWfGzKeHCbu8bJ6K1tCEcmPsD3i8lPXbg/edit?usp=sharing\n<https://docs.google.com/document/d/1EiIHZjOjf6yWfGzKeHCbu8bJ6K1tCEcmPsD3i8lPXbg/edit?usp=sharing>\n\n\nThanks.", "msg_date": "Sat, 28 Mar 2020 19:03:29 +0530", "msg_from": "Kartik Ohri <kartikohri13@gmail.com>", "msg_from_op": true, "msg_subject": "GSoC Proposal" }, { "msg_contents": "Greetings,\n\n* Kartik Ohri (kartikohri13@gmail.com) wrote:\n> Hi! Can I get some review on my GSoC proposal ?\n> \n> https://docs.google.com/document/d/1EiIHZjOjf6yWfGzKeHCbu8bJ6K1tCEcmPsD3i8lPXbg/edit?usp=sharing\n> <https://docs.google.com/document/d/1EiIHZjOjf6yWfGzKeHCbu8bJ6K1tCEcmPsD3i8lPXbg/edit?usp=sharing>\n\nI recommend you chat with the mentor who is listed for the GSoC Project\nyou're writing a proposal for directly. Chip may see it here also, but\nbest to make sure.\n\nThanks!\n\nStephen", "msg_date": "Sun, 29 Mar 2020 15:02:33 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: GSoC Proposal" }, { "msg_contents": "Hi! I have shared the proposal with him directly. But as I was made aware\nthat the project's prospective mentors are not at much liberty to give\nfeedback on the proposal due to mentor guidelines, I wished if someone else\ncould give some feedback (even some general feedback for submitting a\nproposal to PostgreSQL organization would be appreciated).\n\nOn Mon, Mar 30, 2020, 12:32 AM Stephen Frost <sfrost@snowman.net> wrote:\n\n> Greetings,\n>\n> * Kartik Ohri (kartikohri13@gmail.com) wrote:\n> > Hi! 
Can I get some review on my GSoC proposal ?\n> >\n> >\n> https://docs.google.com/document/d/1EiIHZjOjf6yWfGzKeHCbu8bJ6K1tCEcmPsD3i8lPXbg/edit?usp=sharing\n> > <\n> https://docs.google.com/document/d/1EiIHZjOjf6yWfGzKeHCbu8bJ6K1tCEcmPsD3i8lPXbg/edit?usp=sharing\n> >\n>\n> I recommend you chat with the mentor who is listed for the GSoC Project\n> you're writing a proposal for directly. Chip may see it here also, but\n> best to make sure.\n>\n> Thanks!\n>\n> Stephen\n>\n", "msg_date": "Mon, 30 Mar 2020 00:48:00 +0530", "msg_from": "Kartik Ohri <kartikohri13@gmail.com>", "msg_from_op": true, "msg_subject": "GSoC Proposal" }, { "msg_contents": "On 03/29/20 15:18, Kartik Ohri wrote:\n> Hi! I have shared the proposal with him directly. 
But as I was made aware\n> that the project's prospective mentors are not at much liberty to give\n> feedback on the proposal due to mentor guidelines, I wished if someone else\n\nI'll add to that in case I could have misunderstood the mentor guidelines;\nwe have been in some communication about the parameters of the project,\nbut I was not sure how much involvement I should have in the preparation\nof the proposal itself.\n\nIs that in line with how other mentors for PostgreSQL have approached it?\n\nRegards,\n-Chap\n\n\n", "msg_date": "Sun, 29 Mar 2020 17:33:45 -0400", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: GSoC Proposal" } ]
[ { "msg_contents": "Enable Unix-domain sockets support on Windows\n\nAs of Windows 10 version 1803, Unix-domain sockets are supported on\nWindows. But it's not automatically detected by configure because it\nlooks for struct sockaddr_un and Windows doesn't define that. So we\njust make our own definition on Windows and override the configure\nresult.\n\nSet DEFAULT_PGSOCKET_DIR to empty on Windows so by default no\nUnix-domain socket is used, because there is no good standard\nlocation.\n\nIn pg_upgrade, we have to do some extra tweaking to preserve the\nexisting behavior of not using Unix-domain sockets on Windows. Adding\nsupport would be desirable, but it needs further work, in particular a\nway to select whether to use Unix-domain sockets from the command-line\nor with a run-time test.\n\nThe pg_upgrade test script needs a fix. The previous code passed\n\"localhost\" to postgres -k, which only happened to work because\nWindows used to ignore the -k argument value altogether. We instead\nneed to pass an empty string to get the desired effect.\n\nThe test suites will continue to not use Unix-domain sockets on\nWindows. This requires a small tweak in pg_regress.c. 
The TAP tests\ndon't need to be changed because they decide by the operating system\nrather than HAVE_UNIX_SOCKETS.\n\nReviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>\nDiscussion: https://www.postgresql.org/message-id/flat/54bde68c-d134-4eb8-5bd3-8af33b72a010@2ndquadrant.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/8f3ec75de4060d86176ad4ac998eeb87a39748c2\n\nModified Files\n--------------\nconfig/c-library.m4 | 5 +++--\nconfigure | 5 ++++-\nsrc/bin/pg_upgrade/option.c | 4 ++--\nsrc/bin/pg_upgrade/server.c | 2 +-\nsrc/bin/pg_upgrade/test.sh | 11 ++++++-----\nsrc/include/c.h | 4 ++++\nsrc/include/pg_config.h.in | 6 +++---\nsrc/include/pg_config_manual.h | 15 ++++++++-------\nsrc/include/port/win32.h | 11 +++++++++++\nsrc/test/regress/pg_regress.c | 10 +++++++---\nsrc/tools/msvc/Solution.pm | 2 +-\n11 files changed, 50 insertions(+), 25 deletions(-)", "msg_date": "Sat, 28 Mar 2020 14:06:59 +0000", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": true, "msg_subject": "pgsql: Enable Unix-domain sockets support on Windows" }, { "msg_contents": "On Sat, Mar 28, 2020 at 7:37 PM Peter Eisentraut <peter@eisentraut.org> wrote:\n>\n> Enable Unix-domain sockets support on Windows\n>\n\n+\n+/*\n+ * Windows headers don't define this structure, but you can define it yourself\n+ * to use the functionality.\n+ */\n+struct sockaddr_un\n+{\n+ unsigned short sun_family;\n+ char sun_path[108];\n+};\n\nI was going through this feature and reading about Windows support for\nit. I came across a few links which suggest that this structure is\ndefined in <afunix.h>. 
Is there a reason for not using this via\nafunix.h?\n\n[1] - https://devblogs.microsoft.com/commandline/af_unix-comes-to-windows/\n[2] - https://gist.github.com/NZSmartie/079d8f894ee94f3035306cb23d49addc\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 26 Jun 2020 17:51:29 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Enable Unix-domain sockets support on Windows" }, { "msg_contents": "On 2020-06-26 14:21, Amit Kapila wrote:\n> On Sat, Mar 28, 2020 at 7:37 PM Peter Eisentraut <peter@eisentraut.org> wrote:\n>>\n>> Enable Unix-domain sockets support on Windows\n>>\n> \n> +\n> +/*\n> + * Windows headers don't define this structure, but you can define it yourself\n> + * to use the functionality.\n> + */\n> +struct sockaddr_un\n> +{\n> + unsigned short sun_family;\n> + char sun_path[108];\n> +};\n> \n> I was going through this feature and reading about Windows support for\n> it. I came across a few links which suggest that this structure is\n> defined in <afunix.h>. Is there a reason for not using this via\n> afunix.h?\n> \n> [1] - https://devblogs.microsoft.com/commandline/af_unix-comes-to-windows/\n> [2] - https://gist.github.com/NZSmartie/079d8f894ee94f3035306cb23d49addc\n\nIf we did it that way we'd have to write some kind of configuration-time \ncheck for the MSVC build, since not all Windows versions have that \nheader. Also, not all versions of MinGW have that header (possibly \nnone). 
So the current implementation is probably the most practical \ncompromise.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sat, 27 Jun 2020 11:36:10 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Enable Unix-domain sockets support on Windows" }, { "msg_contents": "On Sat, Jun 27, 2020 at 3:06 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2020-06-26 14:21, Amit Kapila wrote:\n> > On Sat, Mar 28, 2020 at 7:37 PM Peter Eisentraut <peter@eisentraut.org> wrote:\n> >>\n> >> Enable Unix-domain sockets support on Windows\n> >>\n> >\n> > +\n> > +/*\n> > + * Windows headers don't define this structure, but you can define it yourself\n> > + * to use the functionality.\n> > + */\n> > +struct sockaddr_un\n> > +{\n> > + unsigned short sun_family;\n> > + char sun_path[108];\n> > +};\n> >\n> > I was going through this feature and reading about Windows support for\n> > it. I came across a few links which suggest that this structure is\n> > defined in <afunix.h>. Is there a reason for not using this via\n> > afunix.h?\n> >\n> > [1] - https://devblogs.microsoft.com/commandline/af_unix-comes-to-windows/\n> > [2] - https://gist.github.com/NZSmartie/079d8f894ee94f3035306cb23d49addc\n>\n> If we did it that way we'd have to write some kind of configuration-time\n> check for the MSVC build, since not all Windows versions have that\n> header. Also, not all versions of MinGW have that header (possibly\n> none). So the current implementation is probably the most practical\n> compromise.\n>\n\nFair enough, but what should be the behavior in the Windows versions\n(<10) where Unix-domain sockets are not supported? BTW, in which\nformat the path needs to be specified for unix_socket_directories? 
I\ntried with '/c/tmp', 'c:/tmp', 'tmp' but nothing seems to be working,\nit gives me errors like: \"could not create lock file\n\"/c/tmp/.s.PGSQL.5432.lock\": No such file or directory\" on server\nstart. I am trying this on Win7 just to check what is the behavior of\nthis feature on it.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 27 Jun 2020 17:27:20 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Enable Unix-domain sockets support on Windows" }, { "msg_contents": "On 2020-06-27 13:57, Amit Kapila wrote:\n> Fair enough, but what should be the behavior in the Windows versions\n> (<10) where Unix-domain sockets are not supported?\n\nYou get an error about an unsupported address family, similar to trying \nto use IPv6 on a system that doesn't support it.\n\n> BTW, in which\n> format the path needs to be specified for unix_socket_directories? I\n> tried with '/c/tmp', 'c:/tmp', 'tmp' but nothing seems to be working,\n> it gives me errors like: \"could not create lock file\n> \"/c/tmp/.s.PGSQL.5432.lock\": No such file or directory\" on server\n> start. I am trying this on Win7 just to check what is the behavior of\n> this feature on it.\n\nHmm, the only thing I remember about this now is that you need to use \nnative Windows paths, meaning you can't just use /tmp under MSYS, but it \nneeds to be something like C:\\something. 
But the error you have there \nis not even about the socket file but about the lock file, which is a \nnormal file, so if that goes wrong, it might be an unrelated problem.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Sun, 28 Jun 2020 10:33:53 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Enable Unix-domain sockets support on Windows" }, { "msg_contents": "On Sun, Jun 28, 2020 at 2:03 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2020-06-27 13:57, Amit Kapila wrote:\n> > BTW, in which\n> > format the path needs to be specified for unix_socket_directories? I\n> > tried with '/c/tmp', 'c:/tmp', 'tmp' but nothing seems to be working,\n> > it gives me errors like: \"could not create lock file\n> > \"/c/tmp/.s.PGSQL.5432.lock\": No such file or directory\" on server\n> > start. I am trying this on Win7 just to check what is the behavior of\n> > this feature on it.\n>\n> Hmm, the only thing I remember about this now is that you need to use\n> native Windows paths, meaning you can't just use /tmp under MSYS, but it\n> needs to be something like C:\\something.\n>\n\nI have tried it by giving something like that.\nAfter giving path as unix_socket_directories = 'C:\\\\akapila', I get\nbelow errors on server start:\n2020-06-29 08:19:13.174 IST [4460] LOG: could not create Unix socket\nfor address \"C:/akapila/.s.PGSQL.5432\": An address incompatible with\nthe request\ned protocol was used.\n2020-06-29 08:19:13.205 IST [4460] WARNING: could not create\nUnix-domain socket in directory \"C:/akapila\"\n2020-06-29 08:19:13.205 IST [4460] FATAL: could not create any\nUnix-domain sockets\n2020-06-29 08:19:13.221 IST [4460] LOG: database system is shut down\n\nAfter giving path as unix_socket_directories = 'C:\\akapila', I get\nbelow errors on server start:\n2020-06-29 08:24:11.861 
IST [4808] FATAL: could not create lock file\n\"C:akapila/.s.PGSQL.5432.lock\": No such file or directory\n2020-06-29 08:24:11.877 IST [4808] LOG: database system is shut down\n\n> But the error you have there\n> is not even about the socket file but about the lock file, which is a\n> normal file, so if that goes wrong, it might be an unrelated problem.\n>\n\nYeah, but as I am trying this on Win7 machine, I was expecting an\nerror similar to what you were saying: \"unsupported address family\n...\". It seems this error occurred after passing that phase. I am not\nsure what is going on here and maybe it is not important as well\nbecause we don't support this feature on Win7 but probably an\nappropriate error message would have been good.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 29 Jun 2020 09:04:25 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Enable Unix-domain sockets support on Windows" }, { "msg_contents": "\nOn 6/28/20 4:33 AM, Peter Eisentraut wrote:\n> On 2020-06-27 13:57, Amit Kapila wrote:\n>> Fair enough, but what should be the behavior in the Windows versions\n>> (<10) where Unix-domain sockets are not supported?\n>\n> You get an error about an unsupported address family, similar to\n> trying to use IPv6 on a system that doesn't support it.\n>\n>> BTW, in which\n>> format the path needs to be specified for unix_socket_directories?  I\n>> tried with '/c/tmp', 'c:/tmp', 'tmp' but nothing seems to be working,\n>> it gives me errors like: \"could not create lock file\n>> \"/c/tmp/.s.PGSQL.5432.lock\": No such file or directory\" on server\n>> start.  I am trying this on Win7 just to check what is the behavior of\n>> this feature on it.\n>\n> Hmm, the only thing I remember about this now is that you need to use\n> native Windows paths, meaning you can't just use /tmp under MSYS, but\n> it needs to be something like C:\\something.  
But the error you have\n> there is not even about the socket file but about the lock file, which\n> is a normal file, so if that goes wrong, it might be an unrelated\n> problem.\n>\n\n\nIt needs to be a path from the Windows POV, not an Msys virtualized\npath. So c:/tmp or just /tmp should work, but /c/tmp or similar probably\nwill not. The directory needs to exist. I just checked that this is\nworking, both in postgresql.conf and on the psql command line.\n\n\ncheers\n\n\nandrew\n\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Mon, 29 Jun 2020 11:17:58 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Enable Unix-domain sockets support on Windows" }, { "msg_contents": "On Mon, Jun 29, 2020 at 8:48 PM Andrew Dunstan\n<andrew.dunstan@2ndquadrant.com> wrote:\n>\n>\n> On 6/28/20 4:33 AM, Peter Eisentraut wrote:\n> > On 2020-06-27 13:57, Amit Kapila wrote:\n> >> Fair enough, but what should be the behavior in the Windows versions\n> >> (<10) where Unix-domain sockets are not supported?\n> >\n> > You get an error about an unsupported address family, similar to\n> > trying to use IPv6 on a system that doesn't support it.\n> >\n> >> BTW, in which\n> >> format the path needs to be specified for unix_socket_directories? I\n> >> tried with '/c/tmp', 'c:/tmp', 'tmp' but nothing seems to be working,\n> >> it gives me errors like: \"could not create lock file\n> >> \"/c/tmp/.s.PGSQL.5432.lock\": No such file or directory\" on server\n> >> start. I am trying this on Win7 just to check what is the behavior of\n> >> this feature on it.\n> >\n> > Hmm, the only thing I remember about this now is that you need to use\n> > native Windows paths, meaning you can't just use /tmp under MSYS, but\n> > it needs to be something like C:\\something. 
But the error you have\n> > there is not even about the socket file but about the lock file, which\n> > is a normal file, so if that goes wrong, it might be an unrelated\n> > problem.\n> >\n>\n>\n> It needs to be a path from the Windows POV, not an Msys virtualized\n> path. So c:/tmp or just /tmp should work, but /c/tmp or similar probably\n> will not. The directory needs to exist. I just checked that this is\n> working, both in postgresql.conf and on the psql command line.\n>\n\nOkay, thanks for the verification. I was trying to see its behavior\non Win7 or a similar environment where this feature is not supported\nto see if we get the appropriate error message. If by any chance, you\nhave access to such an environment, then it might be worth trying on\nsuch an environment once.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 30 Jun 2020 09:43:03 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Enable Unix-domain sockets support on Windows" }, { "msg_contents": "\nOn 6/30/20 12:13 AM, Amit Kapila wrote:\n> On Mon, Jun 29, 2020 at 8:48 PM Andrew Dunstan\n> <andrew.dunstan@2ndquadrant.com> wrote:\n>>\n>>\n>>\n>> It needs to be a path from the Windows POV, not an Msys virtualized\n>> path. So c:/tmp or just /tmp should work, but /c/tmp or similar probably\n>> will not. The directory needs to exist. I just checked that this is\n>> working, both in postgresql.conf and on the psql command line.\n>>\n> Okay, thanks for the verification. I was trying to see its behavior\n> on Win7 or a similar environment where this feature is not supported\n> to see if we get the appropriate error message. 
If by any chance, you\n> have access to such an environment, then it might be worth trying on\n> such an environment once.\n>\n\n\nI haven't had a working Windows 7 environment for quite some years.\n\n\ncheers\n\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n", "msg_date": "Tue, 30 Jun 2020 19:06:24 -0400", "msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Enable Unix-domain sockets support on Windows" } ]
[ { "msg_contents": "Hi,\n\nheap_abort_speculative() does:\n\t/*\n\t * The tuple will become DEAD immediately. Flag that this page\n\t * immediately is a candidate for pruning by setting xmin to\n\t * RecentGlobalXmin. That's not pretty, but it doesn't seem worth\n\t * inventing a nicer API for this.\n\t */\n\tAssert(TransactionIdIsValid(RecentGlobalXmin));\n\tPageSetPrunable(page, RecentGlobalXmin);\n\nbut that doesn't seem right to me. RecentGlobalXmin could very well be\nolder than the table's relfrozenxid. Especially when multiple databases\nare used, or logical replication is active, it's not unlikely at all.\n\nThat's because RecentGlobalXmin is a) the minimum xmin of all databases,\nwhereas horizon computations for relations are done for only the current\ndatabase b) RecentGlobalXmin may have been computed a while ago (when\nthe snapshot for the transaction was computed, for example), but a\nconcurrent vacuum could be more recent c) RecentGlobalXmin includes the\nmore \"pessimistic\" xmin for catalog relations.\n\nUnless somebody has a better idea for how to solve this in a\nback-paptchable way, I think it'd be best to replace RecentGlobalXmin\nwith RecentXmin. That'd be safe as far as I can see.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 28 Mar 2020 14:30:23 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Potential (low likelihood) wraparound hazard in\n heap_abort_speculative()" }, { "msg_contents": "On Sat, Mar 28, 2020 at 2:30 PM Andres Freund <andres@anarazel.de> wrote:\n> Unless somebody has a better idea for how to solve this in a\n> back-paptchable way, I think it'd be best to replace RecentGlobalXmin\n> with RecentXmin. That'd be safe as far as I can see.\n\nAs far as I can tell, the worst consequence of this wraparound hazard\nis that we don't opportunistically prune at some later point where we\nprobably ought to. 
Do you agree with that assessment?\n\nSince pd_prune_xid is documented as \"a hint field\" in bufpage.h, this\nbug cannot possibly lead to queries that give wrong answers. The\nperformance issue also seems like it should not have much impact,\nsince we only call heap_abort_speculative() in extreme cases where\nthere is a lot of contention among concurrent upserting sessions.\nAlso, as you pointed out already, RecentGlobalXmin is probably not\ngoing to be any different to RecentXmin.\n\nI am in favor of fixing the issue, and backpatching all the way. I\njust want to put the issue in perspective, and have my own\nunderstanding of things verified.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 29 Mar 2020 15:20:01 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Potential (low likelihood) wraparound hazard in\n heap_abort_speculative()" }, { "msg_contents": "Hi,\n\nOn 2020-03-29 15:20:01 -0700, Peter Geoghegan wrote:\n> On Sat, Mar 28, 2020 at 2:30 PM Andres Freund <andres@anarazel.de> wrote:\n> > Unless somebody has a better idea for how to solve this in a\n> > back-paptchable way, I think it'd be best to replace RecentGlobalXmin\n> > with RecentXmin. That'd be safe as far as I can see.\n> \n> As far as I can tell, the worst consequence of this wraparound hazard\n> is that we don't opportunistically prune at some later point where we\n> probably ought to. Do you agree with that assessment?\n\nProbably, yes.\n\n\n> Since pd_prune_xid is documented as \"a hint field\" in bufpage.h, this\n> bug cannot possibly lead to queries that give wrong answers. The\n> performance issue also seems like it should not have much impact,\n> since we only call heap_abort_speculative() in extreme cases where\n> there is a lot of contention among concurrent upserting sessions.\n\nWell, I think it could be fairly \"persistent\" in being set in some\ncases. 
PageSetPrunable() and heap_prune_record_prunable() check that a\nnew prune xid is newer than the current one.\n\nThat said, I still think it's unlikely to be really problematic.\n\n\n\n> Also, as you pointed out already, RecentGlobalXmin is probably not\n> going to be any different to RecentXmin.\n\nHuh, I think they very commonly are radically different? Where did I\npoint that out? RecentXmin is the xmin of the last snapshot\ncomputed. Whereas RecentGlobalXmin basically is the oldest xmin of any\nbackend. That's a pretty large difference? Especially with longrunning\nsessions, replication, logical decoding those can be different by\nhundreds of millions of xids.\n\nWhere did I point out that they're not going to be very different?\n\n\n> I am in favor of fixing the issue, and backpatching all the way. I\n> just want to put the issue in perspective, and have my own\n> understanding of things verified.\n\nCool.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 29 Mar 2020 15:31:45 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Potential (low likelihood) wraparound hazard in\n heap_abort_speculative()" } ]
[ { "msg_contents": "Hi,\nThis patch fixes some redundant initilization, that are safe to remove.\n\nbest regards,\nRanier Vilela", "msg_date": "Sat, 28 Mar 2020 19:04:00 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] Redudant initilization" }, { "msg_contents": "Hello.\n\nAt Sat, 28 Mar 2020 19:04:00 -0300, Ranier Vilela <ranier.vf@gmail.com> wrote in \n> Hi,\n> This patch fixes some redundant initilization, that are safe to remove.\n\n> diff --git a/src/backend/access/gist/gistxlog.c b/src/backend/access/gist/gistxlog.c\n> index d3f3a7b803..ffaa2b1ab4 100644\n> --- a/src/backend/access/gist/gistxlog.c\n> +++ b/src/backend/access/gist/gistxlog.c\n> @@ -396,7 +396,7 @@ gistRedoPageReuse(XLogReaderState *record)\n> \tif (InHotStandby)\n> \t{\n> \t\tFullTransactionId latestRemovedFullXid = xlrec->latestRemovedFullXid;\n> -\t\tFullTransactionId nextFullXid = ReadNextFullTransactionId();\n> +\t\tFullTransactionId nextFullXid;\n\nI'd prefer to preserve this and remove the latter.\n\n> diff --git a/src/backend/catalog/heap.c b/src/backend/catalog/heap.c\n> index 9d9e915979..795cf349eb 100644\n> --- a/src/backend/catalog/heap.c\n> +++ b/src/backend/catalog/heap.c\n> @@ -3396,7 +3396,7 @@ List *\n> heap_truncate_find_FKs(List *relationIds)\n> {\n> \tList\t *result = NIL;\n> -\tList\t *oids = list_copy(relationIds);\n> +\tList\t *oids;\n\nThis was just a waste of memory, the fix looks fine.\n\n> diff --git a/src/backend/storage/smgr/md.c b/src/backend/storage/smgr/md.c\n> index c5b771c531..37fbeef841 100644\n> --- a/src/backend/storage/smgr/md.c\n> +++ b/src/backend/storage/smgr/md.c\n> @@ -730,9 +730,11 @@ mdwrite(SMgrRelation reln, ForkNumber forknum, BlockNumber blocknum,\n> BlockNumber\n> mdnblocks(SMgrRelation reln, ForkNumber forknum)\n> {\n> -\tMdfdVec *v = mdopenfork(reln, forknum, EXTENSION_FAIL);\n> +\tMdfdVec *v;\n> \tBlockNumber nblocks;\n> -\tBlockNumber segno = 0;\n> +\tBlockNumber segno;\n> +\n> + 
mdopenfork(reln, forknum, EXTENSION_FAIL);\n> \n> \t/* mdopen has opened the first segment */\n> \tAssert(reln->md_num_open_segs[forknum] > 0);\n\nIt doesn't seems *to me* an issue.\n\n> diff --git a/src/backend/utils/adt/json.c b/src/backend/utils/adt/json.c\n> index 8bb00abb6b..7a6a2ecbe9 100644\n> --- a/src/backend/utils/adt/json.c\n> +++ b/src/backend/utils/adt/json.c\n> @@ -990,7 +990,7 @@ catenate_stringinfo_string(StringInfo buffer, const char *addon)\n> Datum\n> json_build_object(PG_FUNCTION_ARGS)\n> {\n> -\tint\t\t\tnargs = PG_NARGS();\n> +\tint\t\t\tnargs;\n\nThis part looks fine.\n\n> \tint\t\t\ti;\n> \tconst char *sep = \"\";\n> \tStringInfo\tresult;\n> @@ -998,6 +998,8 @@ json_build_object(PG_FUNCTION_ARGS)\n> \tbool\t *nulls;\n> \tOid\t\t *types;\n> \n> + PG_NARGS();\n> +\n> \t/* fetch argument values to build the object */\n> \tnargs = extract_variadic_args(fcinfo, 0, false, &args, &types, &nulls);\n\nPG_NARGS() doesn't have a side-effect so no need to call independently.\n\n\n> diff --git a/src/backend/utils/mmgr/mcxt.c b/src/backend/utils/mmgr/mcxt.c\n> index 9e24fec72d..fb0e833b2d 100644\n> --- a/src/backend/utils/mmgr/mcxt.c\n> +++ b/src/backend/utils/mmgr/mcxt.c\n> @@ -475,7 +475,7 @@ MemoryContextMemAllocated(MemoryContext context, bool recurse)\n> \n> \tif (recurse)\n> \t{\n> -\t\tMemoryContext child = context->firstchild;\n> +\t\tMemoryContext child;\n> \n> \t\tfor (child = context->firstchild;\n> \t\t\t child != NULL;\n\nThis looks fine.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 30 Mar 2020 09:57:01 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Redudant initilization" }, { "msg_contents": "Em dom., 29 de mar. 
de 2020 às 21:57, Kyotaro Horiguchi <\nhorikyota.ntt@gmail.com> escreveu:\n\n> Hello.\n>\n> At Sat, 28 Mar 2020 19:04:00 -0300, Ranier Vilela <ranier.vf@gmail.com>\n> wrote in\n> > Hi,\n> > This patch fixes some redundant initilization, that are safe to remove.\n>\n> > diff --git a/src/backend/access/gist/gistxlog.c\n> b/src/backend/access/gist/gistxlog.c\n> > index d3f3a7b803..ffaa2b1ab4 100644\n> > --- a/src/backend/access/gist/gistxlog.c\n> > +++ b/src/backend/access/gist/gistxlog.c\n> > @@ -396,7 +396,7 @@ gistRedoPageReuse(XLogReaderState *record)\n> > if (InHotStandby)\n> > {\n> > FullTransactionId latestRemovedFullXid =\n> xlrec->latestRemovedFullXid;\n> > - FullTransactionId nextFullXid =\n> ReadNextFullTransactionId();\n> > + FullTransactionId nextFullXid;\n>\n> I'd prefer to preserve this and remove the latter.\n>\n\nOk.\n\n\n> > diff --git a/src/backend/catalog/heap.c b/src/backend/catalog/heap.c\n> > index 9d9e915979..795cf349eb 100644\n> > --- a/src/backend/catalog/heap.c\n> > +++ b/src/backend/catalog/heap.c\n> > @@ -3396,7 +3396,7 @@ List *\n> > heap_truncate_find_FKs(List *relationIds)\n> > {\n> > List *result = NIL;\n> > - List *oids = list_copy(relationIds);\n> > + List *oids;\n>\n> This was just a waste of memory, the fix looks fine.\n>\n> > diff --git a/src/backend/storage/smgr/md.c\n> b/src/backend/storage/smgr/md.c\n> > index c5b771c531..37fbeef841 100644\n> > --- a/src/backend/storage/smgr/md.c\n> > +++ b/src/backend/storage/smgr/md.c\n> > @@ -730,9 +730,11 @@ mdwrite(SMgrRelation reln, ForkNumber forknum,\n> BlockNumber blocknum,\n> > BlockNumber\n> > mdnblocks(SMgrRelation reln, ForkNumber forknum)\n> > {\n> > - MdfdVec *v = mdopenfork(reln, forknum, EXTENSION_FAIL);\n> > + MdfdVec *v;\n> > BlockNumber nblocks;\n> > - BlockNumber segno = 0;\n> > + BlockNumber segno;\n> > +\n> > + mdopenfork(reln, forknum, EXTENSION_FAIL);\n> >\n> > /* mdopen has opened the first segment */\n> > Assert(reln->md_num_open_segs[forknum] > 0);\n>\n> It doesn't 
seems *to me* an issue.\n>\n\nNot a big deal, but the assignment of the variable v here is a small waste,\nsince it is again highlighted right after.\n\n\n> > diff --git a/src/backend/utils/adt/json.c b/src/backend/utils/adt/json.c\n> > index 8bb00abb6b..7a6a2ecbe9 100644\n> > --- a/src/backend/utils/adt/json.c\n> > +++ b/src/backend/utils/adt/json.c\n> > @@ -990,7 +990,7 @@ catenate_stringinfo_string(StringInfo buffer, const\n> char *addon)\n> > Datum\n> > json_build_object(PG_FUNCTION_ARGS)\n> > {\n> > - int nargs = PG_NARGS();\n> > + int nargs;\n>\n> This part looks fine.\n>\n> > int i;\n> > const char *sep = \"\";\n> > StringInfo result;\n> > @@ -998,6 +998,8 @@ json_build_object(PG_FUNCTION_ARGS)\n> > bool *nulls;\n> > Oid *types;\n> >\n> > + PG_NARGS();\n> > +\n> > /* fetch argument values to build the object */\n> > nargs = extract_variadic_args(fcinfo, 0, false, &args, &types,\n> &nulls);\n>\n> PG_NARGS() doesn't have a side-effect so no need to call independently.\n>\n Sorry, does that mean we can remove it completely?\n\n\n>\n> > diff --git a/src/backend/utils/mmgr/mcxt.c\n> b/src/backend/utils/mmgr/mcxt.c\n> > index 9e24fec72d..fb0e833b2d 100644\n> > --- a/src/backend/utils/mmgr/mcxt.c\n> > +++ b/src/backend/utils/mmgr/mcxt.c\n> > @@ -475,7 +475,7 @@ MemoryContextMemAllocated(MemoryContext context,\n> bool recurse)\n> >\n> > if (recurse)\n> > {\n> > - MemoryContext child = context->firstchild;\n> > + MemoryContext child;\n> >\n> > for (child = context->firstchild;\n> > child != NULL;\n>\n> This looks fine.\n>\n\nThank you for the review and consideration.\n\nbest regards,\nRanier Vilela", "msg_date": "Mon, 30 Mar 2020 09:08:59 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Redudant initilization" }, { "msg_contents": "Hi,\nNew patch with yours suggestions.\n\nbest regards,\nRanier Vilela", "msg_date": "Wed, 1 Apr 2020 08:57:18 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Redudant initilization" }, { "msg_contents": "On Wed, Apr 1, 2020 at 08:57:18AM -0300, Ranier Vilela wrote:\n> Hi,\n> New patch with yours suggestions.\n\nPatch applied to head, thanks.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Thu, 3 Sep 2020 22:57:51 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Redudant initilization" }, { "msg_contents": "Em qui., 3 de set. de 2020 às 23:57, Bruce Momjian <bruce@momjian.us>\nescreveu:\n\n> On Wed, Apr 1, 2020 at 08:57:18AM -0300, Ranier Vilela wrote:\n> > Hi,\n> > New patch with yours suggestions.\n>\n> Patch applied to head, thanks.\n>\nThank you Bruce.\n\nregards,\nRanier Vilela", "msg_date": "Fri, 4 Sep 2020 09:39:45 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Redudant initilization" }, { "msg_contents": "On Fri, Sep 4, 2020 at 09:39:45AM -0300, Ranier Vilela wrote:\n> Em qui., 3 de set. 
de 2020 às 23:57, Bruce Momjian <bruce@momjian.us> escreveu:\n> \n>     On Wed, Apr  1, 2020 at 08:57:18AM -0300, Ranier Vilela wrote:\n>     > Hi,\n>     > New patch with yours suggestions.\n> \n>     Patch applied to head, thanks.\n> \n> Thank you Bruce.\n\nI have to say, I am kind of stumped why compilers do not warn of such\ncases, and why we haven't gotten reports about these cases before.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n The usefulness of a cup is in its emptiness, Bruce Lee\n\n\n\n", "msg_date": "Fri, 4 Sep 2020 10:01:23 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Redudant initilization" }, { "msg_contents": "Em sex., 4 de set. de 2020 às 11:01, Bruce Momjian <bruce@momjian.us>\nescreveu:\n\n> On Fri, Sep 4, 2020 at 09:39:45AM -0300, Ranier Vilela wrote:\n> > Em qui., 3 de set. de 2020 às 23:57, Bruce Momjian <bruce@momjian.us>\n> escreveu:\n> >\n> > On Wed, Apr 1, 2020 at 08:57:18AM -0300, Ranier Vilela wrote:\n> > > Hi,\n> > > New patch with yours suggestions.\n> >\n> > Patch applied to head, thanks.\n> >\n> > Thank you Bruce.\n>\n> I have to say, I am kind of stumped why compilers do not warn of such\n> cases, and why we haven't gotten reports about these cases before.\n>\nI believe it is because, syntactically, there is no error.\n\nI would like to thank Kyotaro Horiguchi,\nmy thanks for your review.\n\nregards,\nRanier Vilela", "msg_date": "Fri, 4 Sep 2020 13:55:52 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Redudant initilization" }, { "msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> I have to say, I am kind of stumped why compilers do not warn of such\n> cases, and why we haven't gotten reports about these cases before.\n\nI was just experimenting with clang's \"scan-build\" tool.  It finds\nall of the cases you just fixed, and several dozen more beside.\nQuite a few are things that, as a matter of style, we should *not*\nchange, for instance\n\nrewriteHandler.c:2807:5: warning: Value stored to 'outer_reloids' is never read\n                outer_reloids = list_delete_last(outer_reloids);\n                ^               ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nFailing to update the list pointer here would just be asking for bugs.\nHowever, I see some that look like genuine oversights; will go fix.\n\n(I'm not sure how much I trust scan-build overall.  It produces a\nwhole bunch of complaints about null pointer dereferences, for instance.\nIf those aren't 99% false positives, we'd be crashing constantly.\nIt's also dog-slow.  But it might be something to try occasionally.)\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 04 Sep 2020 13:40:26 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Redudant initilization" }, { "msg_contents": "Em sex., 4 de set. 
de 2020 às 14:40, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n> Bruce Momjian <bruce@momjian.us> writes:\n> > I have to say, I am kind of stumped why compilers do not warn of such\n> > cases, and why we haven't gotten reports about these cases before.\n>\n> I was just experimenting with clang's \"scan-build\" tool. It finds\n> all of the cases you just fixed, and several dozen more beside.\n> Quite a few are things that, as a matter of style, we should *not*\n> change, for instance\n>\n> rewriteHandler.c:2807:5: warning: Value stored to 'outer_reloids' is never\n> read\n> outer_reloids =\n> list_delete_last(outer_reloids);\n> ^\n> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n>\nThere are some like this, in the analyzes that I did.\nEven when it is the last action of the function.\n\n\n> Failing to update the list pointer here would just be asking for bugs.\n> However, I see some that look like genuine oversights; will go fix.\n>\nThanks for fixing this.\n\n\n> (I'm not sure how much I trust scan-build overall. It produces a\n> whole bunch of complaints about null pointer dereferences, for instance.\n> If those aren't 99% false positives, we'd be crashing constantly.\n> It's also dog-slow. But it might be something to try occasionally.)\n>\nI believe it would be very beneficial.\n\nAttached is a patch I made in March/2020, but due to problems,\nit was sent but did not make the list.\nWould you mind taking a look?\nCertainly, if accepted, rebasing would have to be done.\n\nregards,\nRanier Vilela", "msg_date": "Fri, 4 Sep 2020 18:20:01 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Redudant initilization" }, { "msg_contents": "Em sex., 4 de set. de 2020 às 18:20, Ranier Vilela <ranier.vf@gmail.com>\nescreveu:\n\n> Em sex., 4 de set. 
de 2020 às 14:40, Tom Lane <tgl@sss.pgh.pa.us>\n> escreveu:\n>\n>> Bruce Momjian <bruce@momjian.us> writes:\n>> > I have to say, I am kind of stumped why compilers do not warn of such\n>> > cases, and why we haven't gotten reports about these cases before.\n>>\n>> I was just experimenting with clang's \"scan-build\" tool. It finds\n>> all of the cases you just fixed, and several dozen more beside.\n>> Quite a few are things that, as a matter of style, we should *not*\n>> change, for instance\n>>\n>> rewriteHandler.c:2807:5: warning: Value stored to 'outer_reloids' is\n>> never read\n>> outer_reloids =\n>> list_delete_last(outer_reloids);\n>> ^\n>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n>>\n> There are some like this, in the analyzes that I did.\n> Even when it is the last action of the function.\n>\n>\n>> Failing to update the list pointer here would just be asking for bugs.\n>> However, I see some that look like genuine oversights; will go fix.\n>>\n> Thanks for fixing this.\n>\n>\n>> (I'm not sure how much I trust scan-build overall. It produces a\n>> whole bunch of complaints about null pointer dereferences, for instance.\n>> If those aren't 99% false positives, we'd be crashing constantly.\n>> It's also dog-slow. 
But it might be something to try occasionally.)\n>>\n> I believe it would be very beneficial.\n>\n> Attached is a patch I made in March/2020, but due to problems,\n> it was sent but did not make the list.\n> Would you mind taking a look?\n> Certainly, if accepted, rebasing would have to be done.\n>\nHere it is simplified, splitted and rebased.\nSome are bogus, others are interesting.\n\nregards,\nRanier Vilela", "msg_date": "Sat, 5 Sep 2020 13:40:37 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Redudant initilization" }, { "msg_contents": "Ranier Vilela <ranier.vf@gmail.com> writes:\n> Attached is a patch I made in March/2020, but due to problems,\n> it was sent but did not make the list.\n> Would you mind taking a look?\n\nI applied some of this, but other parts had been overtaken by\nevents, and there were other changes that I didn't agree with.\n\nA general comment on the sort of \"dead store\" that I don't think\nwe should remove is where a function is trying to maintain an\ninternal invariant, such as \"this pointer points past the last\ndata written to a buffer\" or \"these two variables are in sync\".\nIf the update happens to be the last one in the function, the\ncompiler may be able to see that the store is dead ... but IMO\nit should just optimize such a store away and not get in the\nprogrammer's face about it. If we manually remove the dead\nstore then what we've done is broken the invariant, and we'll\npay for that in future bugs and maintenance costs. Somebody\nmay someday want to add more code after the step in question,\nand if they fail to undo the manual optimization then they've\ngot a bug. 
Besides which, it's confusing when a function\ndoes something the same way N-1 times and then differently the\nN'th time.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 05 Sep 2020 13:29:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Redudant initilization" }, { "msg_contents": "Em sáb., 5 de set. de 2020 às 14:29, Tom Lane <tgl@sss.pgh.pa.us> escreveu:\n\n> Ranier Vilela <ranier.vf@gmail.com> writes:\n> > Attached is a patch I made in March/2020, but due to problems,\n> > it was sent but did not make the list.\n> > Would you mind taking a look?\n>\n> I applied some of this, but other parts had been overtaken by\n> events, and there were other changes that I didn't agree with.\n>\nI fully agree with your judgment.\n\n\n> A general comment on the sort of \"dead store\" that I don't think\n> we should remove is where a function is trying to maintain an\n> internal invariant, such as \"this pointer points past the last\n> data written to a buffer\" or \"these two variables are in sync\".\n> If the update happens to be the last one in the function, the\n> compiler may be able to see that the store is dead ... but IMO\n> it should just optimize such a store away and not get in the\n> programmer's face about it. If we manually remove the dead\n> store then what we've done is broken the invariant, and we'll\n> pay for that in future bugs and maintenance costs. Somebody\n> may someday want to add more code after the step in question,\n> and if they fail to undo the manual optimization then they've\n> got a bug. Besides which, it's confusing when a function\n> does something the same way N-1 times and then differently the\n\nN'th time.\n>\nGood point.\nThe last store is a little strange, but the compiler will certainly\noptimize.\nMaintenance is expensive, and the current code should be the best example.\n\nregards,\nRanier Vilela\n\nEm sáb., 5 de set. 
de 2020 às 14:29, Tom Lane <tgl@sss.pgh.pa.us> escreveu:Ranier Vilela <ranier.vf@gmail.com> writes:\n> Attached is a patch I made in March/2020, but due to problems,\n> it was sent but did not make the list.\n> Would you mind taking a look?\n\nI applied some of this, but other parts had been overtaken by\nevents, and there were other changes that I didn't agree with.I fully agree with your judgment. \n\nA general comment on the sort of \"dead store\" that I don't think\nwe should remove is where a function is trying to maintain an\ninternal invariant, such as \"this pointer points past the last\ndata written to a buffer\" or \"these two variables are in sync\".\nIf the update happens to be the last one in the function, the\ncompiler may be able to see that the store is dead ... but IMO\nit should just optimize such a store away and not get in the\nprogrammer's face about it.  If we manually remove the dead\nstore then what we've done is broken the invariant, and we'll\npay for that in future bugs and maintenance costs.  Somebody\nmay someday want to add more code after the step in question,\nand if they fail to undo the manual optimization then they've\ngot a bug.  Besides which, it's confusing when a function\ndoes something the same way N-1 times and then differently the\n N'th time.Good point.The last store is a little strange, but the compiler will certainly optimize.Maintenance is expensive, and the current code should be the best example. regards,Ranier Vilela", "msg_date": "Sat, 5 Sep 2020 14:44:20 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Redudant initilization" } ]
[ { "msg_contents": "I happened across this bugreport, which seems to have just enough information\nto be interesting.\n\nhttps://bugs.debian.org/cgi-bin/bugreport.cgi?bug=953204\n|Version: 11.7-0+deb10u1\n|2020-03-05 16:55:55.511 UTC [515] LOG: background worker \"parallel worker\" (PID 884) was terminated by signal 11: Segmentation fault\n|2020-03-05 16:55:55.511 UTC [515] DETAIL: Failed process was running: \n|SELECT distinct student_prob.student_id, student_prob.score, student_name, v_capacity_score.capacity\n|FROM data JOIN model on model.id = 2 AND data_stage(data) = model.target_begin_field_id\n|JOIN student_prob ON data.crm_id = student_prob.student_id AND model.id = student_prob.model_id AND (student_prob.additional_aid < 1)\n|LEFT JOIN v_capacity_score ON data.crm_id = v_capacity_score.student_id AND student_prob.model_id = v_capacity_score.model_id\n|WHERE data.term_code = '202090' AND student_prob.score > 0\n|ORDER BY student_prob.score DESC, student_name\n|LIMIT 100 OFFSET 100 ;\n\nTim: it'd be nice to get more information, if and when possible:\n - \"explain\" plan for that query;\n - \\d for the tables involved: constraints, inheritence, defaults;\n - corefile or backtrace; it looks like there's two different crashes (maybe same problem) so both would be useful;\n - Can you reprodue the crash if you \"SET max_parallel_workers_per_gather=0\" ?\n - Do you know if it crashed under v11.6 ?\n\nIf anyone wants to hack on the .deb:\nhttps://packages.debian.org/buster/amd64/postgresql-11/download and (I couldn't find the dbg package anywhere else)\nhttps://snapshot.debian.org/package/postgresql-11/11.7-0%2Bdeb10u1/#postgresql-11-dbgsym_11.7-0:2b:deb10u1\n\n$ mkdir pg11\n$ cd pg11\n$ wget -q http://security.debian.org/debian-security/pool/updates/main/p/postgresql-11/postgresql-11_11.7-0+deb10u1_amd64.deb\n$ ar x ./postgresql-11_11.7-0+deb10u1_amd64.deb \n$ tar xf ./data.tar.xz \n$ ar x postgresql-11-dbgsym_11.7-0+deb10u1_amd64.deb\n$ tar tf data.tar.xz\n$ 
gdb usr/lib/postgresql/11/bin/postgres \n\n(gdb) set debug-file-directory usr/lib/debug/\n(gdb) file usr/lib/postgresql/11/bin/postmaster \n(gdb) info target \n\nIf I repeat the process Bernhard used (thanks for that) on the first crash in\nlibc6, I get:\n\n(gdb) find /b 0x0000000000022320, 0x000000000016839b, 0xf9, 0x20, 0x77, 0x1f, 0xc5, 0xfd, 0x74, 0x0f, 0xc5, 0xfd, 0xd7, 0xc1, 0x85, 0xc0, 0x0f, 0x85, 0xdf, 0x00, 0x00, 0x00, 0x48, 0x83, 0xc7, 0x20, 0x83, 0xe1, 0x1f, 0x48, 0x83, 0xe7, 0xe0, 0xeb, 0x36, 0x66, 0x90, 0x83, 0xe1, 0x1f, 0x48, 0x83, 0xe7, 0xe0, 0xc5, 0xfd, 0x74, 0x0f, 0xc5, 0xfd, 0xd7, 0xc1, 0xd3, 0xf8, 0x85, 0xc0, 0x74, 0x1b, 0xf3, 0x0f, 0xbc, 0xc0, 0x48, 0x01, 0xf8, 0x48\n0x15c17d <__strlen_avx2+13>\nwarning: Unable to access 1631 bytes of target memory at 0x167d3d, halting search.\n1 pattern found.\n\nI'm tentatively guessing that heap_modify_tuple() is involved, since it calls\ngetmissingattr and (probably) fill_val. It looks like maybe some data\nstructure is corrupted which crashed two parallel workers, one in\nfill_val()/strlen() and one in heap_deform_tuple()/getmissingattr(). Maybe\nsomething not initialized in parallel worker, or a use-after-free? I'll stop\nguessing.\n\nJustin\n\n\n", "msg_date": "Sat, 28 Mar 2020 17:30:52 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "debian bugrept involving fast default crash in pg11.7" }, { "msg_contents": "I've attached a file containing the \\d+ for all the tables involved and \nthe EXPLAIN ANALYZE for the query.\n\nYes, the crash happened under v11.6. I had tried downgrading when I \nfirst encountered the problem.\n\nWhile trying to put together this information the crash started \nhappening less frequently (I was only able to reproduce it it twice and \nnot in a row) and I am unable to confirm if SET \nmax_parallel_workers_per_gather=0 had any effect.\n\nAlso since I've been able to reproduce I'm currently unable to provide a \ncorefile or backtrace. 
I'll continue to try and reproduce the error so \nI can get one or the other.\n\nI did find a work around for the crash by making the view \n(v_capacity_score) a materialized view.\n\nThanks\ntim\n\nOn 3/28/20 6:30 PM, Justin Pryzby wrote:\n> I happened across this bugreport, which seems to have just enough information\n> to be interesting.\n> \n> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=953204\n> |Version: 11.7-0+deb10u1\n> |2020-03-05 16:55:55.511 UTC [515] LOG: background worker \"parallel worker\" (PID 884) was terminated by signal 11: Segmentation fault\n> |2020-03-05 16:55:55.511 UTC [515] DETAIL: Failed process was running:\n> |SELECT distinct student_prob.student_id, student_prob.score, student_name, v_capacity_score.capacity\n> |FROM data JOIN model on model.id = 2 AND data_stage(data) = model.target_begin_field_id\n> |JOIN student_prob ON data.crm_id = student_prob.student_id AND model.id = student_prob.model_id AND (student_prob.additional_aid < 1)\n> |LEFT JOIN v_capacity_score ON data.crm_id = v_capacity_score.student_id AND student_prob.model_id = v_capacity_score.model_id\n> |WHERE data.term_code = '202090' AND student_prob.score > 0\n> |ORDER BY student_prob.score DESC, student_name\n> |LIMIT 100 OFFSET 100 ;\n> \n> Tim: it'd be nice to get more information, if and when possible:\n> - \"explain\" plan for that query;\n> - \\d for the tables involved: constraints, inheritence, defaults;\n> - corefile or backtrace; it looks like there's two different crashes (maybe same problem) so both would be useful;\n> - Can you reprodue the crash if you \"SET max_parallel_workers_per_gather=0\" ?\n> - Do you know if it crashed under v11.6 ?\n> \n> If anyone wants to hack on the .deb:\n> https://packages.debian.org/buster/amd64/postgresql-11/download and (I couldn't find the dbg package anywhere else)\n> https://snapshot.debian.org/package/postgresql-11/11.7-0%2Bdeb10u1/#postgresql-11-dbgsym_11.7-0:2b:deb10u1\n> \n> $ mkdir pg11\n> $ cd pg11\n> $ wget -q 
http://security.debian.org/debian-security/pool/updates/main/p/postgresql-11/postgresql-11_11.7-0+deb10u1_amd64.deb\n> $ ar x ./postgresql-11_11.7-0+deb10u1_amd64.deb\n> $ tar xf ./data.tar.xz\n> $ ar x postgresql-11-dbgsym_11.7-0+deb10u1_amd64.deb\n> $ tar tf data.tar.xz\n> $ gdb usr/lib/postgresql/11/bin/postgres\n> \n> (gdb) set debug-file-directory usr/lib/debug/\n> (gdb) file usr/lib/postgresql/11/bin/postmaster\n> (gdb) info target\n> \n> If I repeat the process Bernhard used (thanks for that) on the first crash in\n> libc6, I get:\n> \n> (gdb) find /b 0x0000000000022320, 0x000000000016839b, 0xf9, 0x20, 0x77, 0x1f, 0xc5, 0xfd, 0x74, 0x0f, 0xc5, 0xfd, 0xd7, 0xc1, 0x85, 0xc0, 0x0f, 0x85, 0xdf, 0x00, 0x00, 0x00, 0x48, 0x83, 0xc7, 0x20, 0x83, 0xe1, 0x1f, 0x48, 0x83, 0xe7, 0xe0, 0xeb, 0x36, 0x66, 0x90, 0x83, 0xe1, 0x1f, 0x48, 0x83, 0xe7, 0xe0, 0xc5, 0xfd, 0x74, 0x0f, 0xc5, 0xfd, 0xd7, 0xc1, 0xd3, 0xf8, 0x85, 0xc0, 0x74, 0x1b, 0xf3, 0x0f, 0xbc, 0xc0, 0x48, 0x01, 0xf8, 0x48\n> 0x15c17d <__strlen_avx2+13>\n> warning: Unable to access 1631 bytes of target memory at 0x167d3d, halting search.\n> 1 pattern found.\n> \n> I'm tentatively guessing that heap_modify_tuple() is involved, since it calls\n> getmissingattr and (probably) fill_val. It looks like maybe some data\n> structure is corrupted which crashed two parallel workers, one in\n> fill_val()/strlen() and one in heap_deform_tuple()/getmissingattr(). Maybe\n> something not initialized in parallel worker, or a use-after-free? 
I'll stop\n> guessing.\n> \n> Justin\n>", "msg_date": "Thu, 9 Apr 2020 14:05:22 -0400", "msg_from": "Tim Bishop <tim@inroads.ai>", "msg_from_op": false, "msg_subject": "Re: debian bugrept involving fast default crash in pg11.7" }, { "msg_contents": "On Thu, Apr 09, 2020 at 02:05:22PM -0400, Tim Bishop wrote:\n> I've attached a file containing the \\d+ for all the tables involved and the\n> EXPLAIN ANALYZE for the query.\n\nThanks for your response.\n\nDo you know if you used the \"fast default feature\" ?\nThat would happen if you did \"ALTER TABLE tbl ADD col DEFAULT val\"\n\nI guess this is the way to tell:\nSELECT attrelid::regclass, * FROM pg_attribute WHERE atthasmissing;\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 9 Apr 2020 13:31:26 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: debian bugrept involving fast default crash in pg11.7" }, { "msg_contents": "SELECT attrelid::regclass, * FROM pg_attribute WHERE atthasmissing;\n-[ RECORD 1 ]-+---------\nattrelid | download\nattrelid | 22749\nattname | filetype\natttypid | 1043\nattstattarget | -1\nattlen | -1\nattnum | 5\nattndims | 0\nattcacheoff | -1\natttypmod | 36\nattbyval | f\nattstorage | x\nattalign | i\nattnotnull | t\natthasdef | t\natthasmissing | t\nattidentity |\nattisdropped | f\nattislocal | t\nattinhcount | 0\nattcollation | 100\nattacl |\nattoptions |\nattfdwoptions |\nattmissingval | {csv}\n\n\nOn 4/9/20 2:31 PM, Justin Pryzby wrote:\n> On Thu, Apr 09, 2020 at 02:05:22PM -0400, Tim Bishop wrote:\n>> I've attached a file containing the \\d+ for all the tables involved and the\n>> EXPLAIN ANALYZE for the query.\n> \n> Thanks for your response.\n> \n> Do you know if you used the \"fast default feature\" ?\n> That would happen if you did \"ALTER TABLE tbl ADD col DEFAULT val\"\n> \n> I guess this is the way to tell:\n> SELECT attrelid::regclass, * FROM pg_attribute WHERE atthasmissing;\n> \n\n\n\n", "msg_date": "Thu, 9 Apr 2020 14:36:26 -0400", 
"msg_from": "Tim Bishop <tim@inroads.ai>", "msg_from_op": false, "msg_subject": "Re: debian bugrept involving fast default crash in pg11.7" }, { "msg_contents": "On Thu, Apr 09, 2020 at 02:36:26PM -0400, Tim Bishop wrote:\n> SELECT attrelid::regclass, * FROM pg_attribute WHERE atthasmissing;\n> -[ RECORD 1 ]-+---------\n> attrelid | download\n> attrelid | 22749\n> attname | filetype\n\nBut that table isn't involved in the crashing query, right ?\nAre data_stage() and income_index() locally defined functions ? PLPGSQL ??\nDo they access the download table (or view or whatever it is) ?\n\nThanks,\n-- \nJustin\n\n\n", "msg_date": "Thu, 9 Apr 2020 15:39:49 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "Re: debian bugrept involving fast default crash in pg11.7" }, { "msg_contents": "\nOn 4/9/20 4:39 PM, Justin Pryzby wrote:\n> On Thu, Apr 09, 2020 at 02:36:26PM -0400, Tim Bishop wrote:\n>> SELECT attrelid::regclass, * FROM pg_attribute WHERE atthasmissing;\n>> -[ RECORD 1 ]-+---------\n>> attrelid | download\n>> attrelid | 22749\n>> attname | filetype\n> But that table isn't involved in the crashing query, right ?\n> Are data_stage() and income_index() locally defined functions ? PLPGSQL ??\n> Do they access the download table (or view or whatever it is) ?\n>\n\nAs requested I have reviewed this old thread. You are correct, this\ntable is not involved in the query. That doesn't mean that the changes\nmade by the fast default code haven't caused a problem, but it makes it\na bit less likely. 
If the OP is still interested and can provide a\nself-contained recipe to reproduce the problem I can investigate.\nWithout that it's difficult to know what to look at.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 4 Apr 2021 17:54:51 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: debian bugrept involving fast default crash in pg11.7" } ]
[ { "msg_contents": "During a stress test of an experimental patch (which implements a new\ntechnique for managing B-Tree index bloat caused by non-HOT updates),\nI noticed a minor bug in _bt_truncate(). The issue affects Postgres 12\n+ master.\n\nThe problem is that INCLUDE indexes don't have their non-key\nattributes physically truncated away in a small minority of cases. I\nsay \"physically\" because we nevertheless encode the number of\nremaining key attributes in the tuple correctly in all cases (we also\ncorrectly encode the presence or absence of the special pivot tuple\nheap TID representation in all cases). That is, we don't consistently\navoid non-key attribute space overhead in the new high key, even\nthough _bt_truncate() is clearly supposed to consistently avoid said\noverhead. Even when it must add a heap TID to the high key using the\nspecial pivot representation of heap TID, it shouldn't have to keep\naround the non-key attributes.\n\nThis only happens when we cannot truncate away any key columns (and\nmust therefore include a heap TID in the new leaf high key) in an\nINCLUDE index. This condition is rare because in general nbtsplitloc.c\ngoes out of its way to at least avoid having to include a heap TID in\nnew leaf page high keys. It's also rare because INCLUDE indexes are\ngenerally only used with unique constraints/indexes, which makes it\nparticularly likely that nbtsplitloc.c will be able to avoid including\na heap TID in the final high key (unique indexes seldom have enough\nduplicates to make nbtsplitloc.c ever use its \"single value\"\nstrategy).\n\nAttached patch fixes the issue. Barring objections, I'll push this to\nv12 + master branches early next week. 
The bug is low severity, but\nthen the fix is very low risk.\n\n-- \nPeter Geoghegan", "msg_date": "Sat, 28 Mar 2020 17:11:55 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Minor bug in suffix truncation of non-key attributes from INCLUDE\n indexes" } ]
[ { "msg_contents": "Commit 0fb54de9a (\"Support building with Visual Studio 2015\")\nintroduced a hack in chklocale.c's win32_langinfo() to make\nit use GetLocaleInfoEx() in place of _create_locale().\n\nThere's a problem with this, which is that if I'm reading the\ndocs correctly, GetLocaleInfoEx() accepts a smaller set of\npossible locale strings (only \"locale names\") than do either\n_create_locale() or setlocale(). The _create_locale() docs say\n\n The locale argument can take a locale name, a language string, a\n language string and country/region code, a code page, or a language\n string, country/region code, and code page.\n\nand they imply (but don't quite manage to say in so many words)\nthat these are the same strings accepted by setlocale().\n\nThe reason this is a problem is that when given a locale string,\nin either initdb or CREATE DATABASE, we first validate it by\nseeing if setlocale() likes it. We produce a reasonable error\nmessage if not. Otherwise we then go on to try to identify the\nimplied encoding via chklocale.c. But if GetLocaleInfoEx()\nfails, we fall back to trying to parse out the codepage for\nourselves, which can lead to a silly/unhelpful error message,\nas recently complained of at [1].\n\nThe reason for the hack, per the comments, is that VS2015\nomits a codepage field from the result of _create_locale();\nand some optimism is expressed therein that Microsoft might\nundo that oversight in future. Has this been fixed in more\nrecent VS versions? If not, can we find another, more robust\nway to do it?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/F4D04849032C4464B8FF17CB0F896F9E%40dell2\n\n\n", "msg_date": "Sat, 28 Mar 2020 21:29:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Can we get rid of GetLocaleInfoEx() yet?" 
}, { "msg_contents": "On Sun, Mar 29, 2020 at 3:29 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>\n> The reason for the hack, per the comments, is that VS2015\n> omits a codepage field from the result of _create_locale();\n> and some optimism is expressed therein that Microsoft might\n> undo that oversight in future. Has this been fixed in more\n> recent VS versions? If not, can we find another, more robust\n> way to do it?\n>\n\nWhile working on another issue I have seen this issue reproduce in VS2019.\nSo no, it has not been fixed.\n\nPlease find attached a patch that provides a better detection of the \"uft8\"\ncases.\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Sun, 29 Mar 2020 10:36:32 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Can we get rid of GetLocaleInfoEx() yet?" }, { "msg_contents": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?= <juanjo.santamaria@gmail.com> writes:\n> On Sun, Mar 29, 2020 at 3:29 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> The reason for the hack, per the comments, is that VS2015\n>> omits a codepage field from the result of _create_locale();\n>> and some optimism is expressed therein that Microsoft might\n>> undo that oversight in future. Has this been fixed in more\n>> recent VS versions? If not, can we find another, more robust\n>> way to do it?\n\n> While working on another issue I have seen this issue reproduce in VS2019.\n> So no, it has not been fixed.\n\nOh well, I figured that was too optimistic :-(\n\n> Please find attached a patch that provides a better detection of the \"uft8\"\n> cases.\n\nIn general, I think the problem is that we might be dealing with a\nUnix-style locale string, in which the encoding name might be quite\na few other things besides \"utf8\". But actually your patch works\nfor that too, since what's going to happen next is we'll search the\nencoding_match_list[] for a match. 
I do suggest being a bit more\nparanoid about what's a codepage number though, as attached.\n(Untested, since I lack a Windows environment, but it's pretty\nstraightforward code.)\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 29 Mar 2020 13:00:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Can we get rid of GetLocaleInfoEx() yet?" }, { "msg_contents": "On Sun, Mar 29, 2020 at 7:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>\n> In general, I think the problem is that we might be dealing with a\n> Unix-style locale string, in which the encoding name might be quite\n> a few other things besides \"utf8\". But actually your patch works\n> for that too, since what's going to happen next is we'll search the\n> encoding_match_list[] for a match. I do suggest being a bit more\n> paranoid about what's a codepage number though, as attached.\n> (Untested, since I lack a Windows environment, but it's pretty\n> straightforward code.)\n>\n\nIt works for the issue just fine, and more comments make a better a\npatch, so no objections from me.\n\nRegards,\n\nJuan José Santamaría Flecha\n\nOn Sun, Mar 29, 2020 at 7:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\nIn general, I think the problem is that we might be dealing with a\nUnix-style locale string, in which the encoding name might be quite\na few other things besides \"utf8\".  But actually your patch works\nfor that too, since what's going to happen next is we'll search the\nencoding_match_list[] for a match.  
I do suggest being a bit more\nparanoid about what's a codepage number though, as attached.\n(Untested, since I lack a Windows environment, but it's pretty\nstraightforward code.)It works for the issue just fine, and more comments make a better a patch, so no objections from me.Regards,Juan José Santamaría Flecha", "msg_date": "Sun, 29 Mar 2020 19:06:56 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Can we get rid of GetLocaleInfoEx() yet?" }, { "msg_contents": "On Sun, Mar 29, 2020 at 07:06:56PM +0200, Juan José Santamaría Flecha wrote:\n> It works for the issue just fine, and more comments make a better a\n> patch, so no objections from me.\n\n+1 from me. And yes, we are still missing lc_codepage in newer\nversions of VS. Locales + Windows != 2, business as usual.\n--\nMichael", "msg_date": "Mon, 30 Mar 2020 16:57:50 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Can we get rid of GetLocaleInfoEx() yet?" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Sun, Mar 29, 2020 at 07:06:56PM +0200, Juan José Santamaría Flecha wrote:\n>> It works for the issue just fine, and more comments make a better a\n>> patch, so no objections from me.\n\n> +1 from me. And yes, we are still missing lc_codepage in newer\n> versions of VS. Locales + Windows != 2, business as usual.\n\nOK, pushed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Mar 2020 11:15:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Can we get rid of GetLocaleInfoEx() yet?" } ]
[ { "msg_contents": "Buildfarm member snapper has been crashing in the core regression tests\nsince commit 17a28b0364 (well, there's a bit of a range of uncertainty\nthere, but 17a28b0364 looks to be the only such commit that could have\naffected code in gistget.c where the crash is). Curiously, its sibling\nskate is *not* failing, despite being on the same machine and compiler.\n\nI looked into this by dint of setting up a similar environment in a\nqemu VM. I might not have reproduced things exactly, but I managed\nto get the same kind of crash at approximately the same place, and\nwhat it looks like to me is a compiler bug. It's iterating\ngistindex_keytest's loop over search keys\n\n ScanKey key = scan->keyData;\n int keySize = scan->numberOfKeys;\n\n while (keySize > 0)\n {\n ...\n key++;\n keySize--;\n }\n\none time too many, and accessing a garbage ScanKey value off the end of\nthe keyData array, leading to a function call into never-never land.\n\nI'm no expert on Sparc assembly code, but it looks like the compiler\nforgot the \",a\" (annul) modifier here:\n\n\t.loc 1 181 0\n\tandcc\t%o7, 64, %g0\n\tbe,pt\t%icc, .L134 <----\n\t addcc\t%l5, -1, %l5\n\t.loc 1 183 0\n\tlduh\t[%i4+16], %o7\n\tadd\t%i4, %o7, %o7\n\tlduh\t[%o7+12], %o7\n\tandcc\t%o7, 1, %g0\n\tbne,pt\t%icc, .L141\n\t ld\t[%fp-32], %g2\n\t.loc 1 163 0\n\tba,pt\t%xcc, .L134\n\t addcc\t%l5, -1, %l5\n\ncausing %l5 (which contains the keySize value) to be decremented\nan extra time in the case where that branch is not taken and\nwe fall through as far as the \"ba\" at the end. Even that would\nnot be fatal, perhaps, except that the compiler also decided to\noptimize the \"keySize > 0\" test to \"keySize != 0\", for ghod only\nknows what reason (surely it's not any faster on a RISC machine),\nso that arriving at .L134 with %l5 containing -1 allows the loop\nto be iterated again. 
Kaboom.\n\nIt's unclear how 17a28b0364 would have affected this, but there is\nan elog() call elsewhere in the same function, so maybe the new\ncoding for that changed register assignments or some other\nphase-of-the-moon effect.\n\nI doubt that anyone's going to take much interest in fixing this\nold compiler version, so my recommendation is to back off the\noptimization level on snapper to -O1, and probably on skate as\nwell because there's no obvious reason why the same compiler bug\nmight not bite skate at some point. I was able to get through\nthe core regression tests on my qemu VM after recompiling\ngistget.c at -O1 (with other flags the same as snapper is using).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 28 Mar 2020 23:50:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "snapper vs. HEAD" }, { "msg_contents": "Hi,\n\nOn 2020-03-28 23:50:32 -0400, Tom Lane wrote:\n> Buildfarm member snapper has been crashing in the core regression tests\n> since commit 17a28b0364 (well, there's a bit of a range of uncertainty\n> there, but 17a28b0364 looks to be the only such commit that could have\n> affected code in gistget.c where the crash is). Curiously, its sibling\n> skate is *not* failing, despite being on the same machine and compiler.\n\nHm. 
There's some difference in code-gen specific options.\n\nsnapper has:\n'CFLAGS' => '-g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security ',\n'CPPFLAGS' => '-D_FORTIFY_SOURCE=2',\n'LDFLAGS' => '-Wl,-z,relro -Wl,-z,now'\nand specifies (among others)\n '--enable-thread-safety',\n '--with-gnu-ld',\nwhereas skate has --enable-cassert.\n\nNot too hard to imagine that several of these could cause enough\ncode-gen differences so that one exhibits the bug, and the other\ndoesn't.\n\n\nThe different commandlines for gistget end up being:\n\nsnapper:\nccache gcc-4.7 -std=gnu99 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -g -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -I../../../../src/include -D_FORTIFY_SOURCE=2 -D_GNU_SOURCE -I/usr/include/libxml2 -I/usr/include/mit-krb5 -c -o gistget.o gistget.c\nskate:\nccache gcc-4.7 -std=gnu99 -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Werror=vla -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -g -O2 -I../../../../src/include -D_GNU_SOURCE -I/usr/include/libxml2 -c -o gistget.o gistget.c\n\n\n> I looked into this by dint of setting up a similar environment in a\n> qemu VM. I might not have reproduced things exactly, but I managed\n> to get the same kind of crash at approximately the same place, and\n> what it looks like to me is a compiler bug.\n\nWhat options were you using? 
Reproducing snapper as exactly as possible?\n\n\n> It's unclear how 17a28b0364 would have affected this, but there is\n> an elog() call elsewhere in the same function, so maybe the new\n> coding for that changed register assignments or some other\n> phase-of-the-moon effect.\n\nYea, wouldn't be too surprising.\n\n\n> I doubt that anyone's going to take much interest in fixing this\n> old compiler version, so my recommendation is to back off the\n> optimization level on snapper to -O1, and probably on skate as\n> well because there's no obvious reason why the same compiler bug\n> might not bite skate at some point. I was able to get through\n> the core regression tests on my qemu VM after recompiling\n> gistget.c at -O1 (with other flags the same as snapper is using).\n\nIf you still have the environment it might make sense to check whether\nit's related to one of the other options. But otherwise I wouldn't be\nagainst the proposal.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 29 Mar 2020 16:17:08 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: snapper vs. HEAD" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-03-28 23:50:32 -0400, Tom Lane wrote:\n>> ... Curiously, its sibling\n>> skate is *not* failing, despite being on the same machine and compiler.\n\n> Hm. There's some difference in code-gen specific options.\n\nYeah, I confirmed that on my VM install too: using our typical codegen\noptions (just -O2 -g), the regression tests pass, matching skate's\nresults. It fell over after I matched snapper's CFLAGS, CPPFLAGS,\nand LDFLAGS. I didn't try to break things down more finely as to\nwhich option(s) trigger the bad code, since it looks like it's probably\nsome purely-internal compiler state ...\n\n> What options were you using? 
Reproducing snapper as exactly as possible?\n\nYeah, see above.\n\n> If you still have the environment it might make sense to check whether\n> it's related to one of the other options. But otherwise I wouldn't be\n> against the proposal.\n\nI could, but it's mighty slow, so I don't especially want to try all 2^N\ncombinations. Do you have any specific cases in mind?\n\n(I guess we can exclude LDFLAGS, since the assembly output is visibly\nbad.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 29 Mar 2020 20:25:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: snapper vs. HEAD" }, { "msg_contents": "Hi,\n\nOn 2020-03-29 20:25:32 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > If you still have the environment it might make sense to check whether\n> > it's related to one of the other options. But otherwise I wouldn't be\n> > against the proposal.\n> \n> I could, but it's mighty slow, so I don't especially want to try all 2^N\n> combinations. Do you have any specific cases in mind?\n\nI'd be most suspicious of -fstack-protector --param=ssp-buffer-size=4\nand -D_FORTIFY_SOURCE=2. The first two have direct codegen implications,\nthe latter can lead to quite different headers being included and adds a\nlot of size tracking to the optimizer.\n\n\n> (I guess we can exclude LDFLAGS, since the assembly output is visibly\n> bad.)\n\nSeems likely.\n\nIs it visibly bad when looking at the .s of gistget.c \"directly\", or\nwhen disassembling the final binary? Because I've seen linkers screw up\non a larger scale than I'd have expected in the past.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 29 Mar 2020 17:56:10 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: snapper vs. 
HEAD" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Is it visibly bad when looking at the .s of gistget.c \"directly\", or\n> when disassembling the final binary? Because I've seen linkers screw up\n> on a larger scale than I'd have expected in the past.\n\nYes, the bogus asm that I showed was from gistget.s, not from\ndisassembling.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 29 Mar 2020 21:08:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: snapper vs. HEAD" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-03-29 20:25:32 -0400, Tom Lane wrote:\n>> I could, but it's mighty slow, so I don't especially want to try all 2^N\n>> combinations. Do you have any specific cases in mind?\n\n> I'd be most suspicious of -fstack-protector --param=ssp-buffer-size=4\n> and -D_FORTIFY_SOURCE=2. The first two have direct codegen implications,\n> the latter can lead to quite different headers being included and adds a\n> lot of size tracking to the optimizer.\n\nIt occurred to me that just recompiling gistget.c, rather than the whole\nbackend, would be enough to prove the point. So after a few trials:\n\n* Removing \"-fstack-protector --param=ssp-buffer-size=4\" does nothing;\nthe generated .o file is bitwise the same.\n\n* Removing -D_FORTIFY_SOURCE=2 does change the bits, but it still\ncrashes.\n\nSo that eliminates all of snapper's special compile options :-(.\nI'm forced to the conclusion that the important difference between\nsnapper and skate is that the latter uses --enable-cassert and the\nformer doesn't, because that's the only remaining difference between\nhow I built a working version before and the not-working version\nI have right now. 
Which means that this really is pretty much a\nphase-of-the-moon compiler bug, and we've just been very lucky\nthat we haven't tripped over it before.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 29 Mar 2020 22:10:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: snapper vs. HEAD" }, { "msg_contents": "I wrote:\n> I'm forced to the conclusion that the important difference between\n> snapper and skate is that the latter uses --enable-cassert and the\n> former doesn't, because that's the only remaining difference between\n> how I built a working version before and the not-working version\n> I have right now.\n\nConfirmed: building gistget with --enable-cassert, and all of snapper's\ncompile/link options, produces something that passes regression.\n\nThe generated asm differs in a whole lot of details, but it looks like\nthe compiler remembers to annul the branch delay slot in all the\nrelevant places:\n\n\t.loc 1 163 0\n\taddcc\t%l7, -1, %l7\n.L186:\n\tbe,pn\t%icc, .L80\n\t add\t%l6, 48, %l6\n...\n\t.loc 1 189 0\n\tbe,a,pt\t%icc, .L186\n\t addcc\t%l7, -1, %l7\n...\n\t.loc 1 183 0\n\tlduh\t[%g4+12], %g4\n\tandcc\t%g4, 1, %g0\n\tbe,a,pt\t%icc, .L186\n\t addcc\t%l7, -1, %l7\n\tandcc\t%o7, 0xff, %g0\n\tbne,a,pt %icc, .L186\n\t addcc\t%l7, -1, %l7\n\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Mar 2020 00:26:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: snapper vs. HEAD" }, { "msg_contents": "Hi,\n\nTom Lane wrote:\n> Confirmed: building gistget with --enable-cassert, and all of snapper's\n> compile/link options, produces something that passes regression.\n\nSkate uses buildfarm default configuration, whereas snapper uses settings which are used when building postgresql debian packages. Debian packages are built without --enable-cassert, but most buildfarm animals build with --enable-cassert. 
I specifically configured skate and snapper like that because I ran into such issues in the past (where debian packages would fail to build on sparc, but buildfarm animals on debian sparc did not highlight the issue).\n\nIn the past, I've already switched from gcc 4.6 to gcc 4.7 as a workaround for a similar compiler bug, but I can't upgrade to a newer gcc without backporting it myself, so for the moment I've switched snapper to use -O1 instead of -O2, for HEAD only.\n\nNot sure whether wheezy on sparc 32-bit is very relevant today, but it's an exotic platform, so I try to keep those buildfarm animals alive as long as it's possible.\n\nBest regards,\nTom Turelinckx\n\n\n", "msg_date": "Mon, 30 Mar 2020 11:17:16 +0200", "msg_from": "\"Tom Turelinckx\" <pgbf@twiska.com>", "msg_from_op": false, "msg_subject": "Re: snapper vs. HEAD" }, { "msg_contents": "\"Tom Turelinckx\" <pgbf@twiska.com> writes:\n> In the past, I've already switched from gcc 4.6 to gcc 4.7 as a workaround for a similar compiler bug, but I can't upgrade to a newer gcc without backporting it myself, so for the moment I've switched snapper to use -O1 instead of -O2, for HEAD only.\n\nThanks! But it doesn't seem to have taken: snapper just did a new run\nthat still failed, and it still seems to be using -O2.\n\n> Not sure whether wheezy on sparc 32-bit is very relevant today, but it's\n> an exotic platform, so I try to keep those buildfarm animals alive as\n> long as it's possible.\n\nYeah, I've got a couple of those myself. But perhaps it'd be sensible\nto move to a newer Debian LTS release? Or have they dropped Sparc\nsupport altogether?\n\n(As of this weekend, it seemed to be impossible to find the wheezy sparc\ndistribution images on-line anymore. Fortunately I still had a download\nof the dvd-1 image stashed away, or I would not have been able to recreate\nmy qemu VM for the purpose. 
It's going to be very hard for any other PG\nhackers to investigate that platform in future.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Mar 2020 12:24:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: snapper vs. HEAD" }, { "msg_contents": "Hi,\n\nTom Lane wrote:\n> Thanks! But it doesn't seem to have taken: snapper just did a new run\n> that still failed, and it still seems to be using -O2.\n\nSnapper did build using -O1 a few hours ago, but it failed the check stage very early with a different error:\n\nFATAL: null value in column \"classid\" of relation \"pg_depend\" violates not-null constraint\n\nI then cleared out the ccache and forced a build of HEAD: same issue.\n\nNext I cleared out the ccache and forced a build of HEAD with -O2: this is the one you saw.\n\nFinally, I've cleared out both the ccache and the accache and forced a build of HEAD with -O1. It failed the check stage again very early with the above error.\n\n> to move to a newer Debian LTS release? Or have they dropped Sparc\n> support altogether?\n\nWheezy was the last stable release for Debian sparc. Sparc64 is a Debian ports architecture, but there are no stable releases for sparc64. I do maintain private sparc64 repositories for Stretch and Buster, and I could configure buildfarm animals for those (on faster hardware too), but those releases are not officially available.\n\nBest regards,\nTom Turelinckx\n\n\n", "msg_date": "Mon, 30 Mar 2020 19:23:42 +0200", "msg_from": "\"Tom Turelinckx\" <pgbf@twiska.com>", "msg_from_op": false, "msg_subject": "Re: snapper vs. HEAD" }, { "msg_contents": "Hi,\n\nOn 2020-03-30 12:24:06 -0400, Tom Lane wrote:\n> \"Tom Turelinckx\" <pgbf@twiska.com> writes:\n> Yeah, I've got a couple of those myself. But perhaps it'd be sensible\n> to move to a newer Debian LTS release? Or have they dropped Sparc\n> support altogether?\n\nI think the 32bit sparc support has been phased out. 
Sparc64 isn't an\n\"official port\", but there's a port:\nhttps://wiki.debian.org/Sparc64\nincluding seemingly regularly updated images.\n\n\n> (As of this weekend, it seemed to be impossible to find the wheezy sparc\n> distribution images on-line anymore. Fortunately I still had a download\n> of the dvd-1 image stashed away, or I would not have been able to recreate\n> my qemu VM for the purpose. It's going to be very hard for any other PG\n> hackers to investigate that platform in future.)\n\nThey've been moved to archive.debian.org, but they should still be\ndownloadable. Seems like the website hasn't been quite updated to that\nfact...\n\nThe installer downloads are still available at:\nhttps://www.debian.org/releases/wheezy/debian-installer/\n\nbut sources.list would need to be pointed at something like\n\ndeb http://archive.debian.org/debian/ wheezy contrib main non-free\n\nSee also https://www.debian.org/distrib/archive\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 30 Mar 2020 10:24:44 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: snapper vs. HEAD" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-03-30 12:24:06 -0400, Tom Lane wrote:\n>> (As of this weekend, it seemed to be impossible to find the wheezy sparc\n>> distribution images on-line anymore.\n\n> The installer downloads are still available at:\n> https://www.debian.org/releases/wheezy/debian-installer/\n\nAh, I should have clarified. That's 7.11, which I'd tried last time\nI was interested in duplicating snapper, and I found that it does\nnot work under qemu's sparc emulation (my notes don't have much detail,\nbut some installer component was core-dumping). What did work, after\nsome hair pulling, was 7.6 ... and that's the version I can no longer\nfind on-line. 
But it has what seems to be the same gcc release as\nsnapper is using, so I figured it was close enough.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Mar 2020 13:40:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: snapper vs. HEAD" }, { "msg_contents": "\"Tom Turelinckx\" <pgbf@twiska.com> writes:\n> Tom Lane wrote:\n>> Thanks! But it doesn't seem to have taken: snapper just did a new run\n>> that still failed, and it still seems to be using -O2.\n\n> Snapper did build using -O1 a few hours ago, but it failed the check stage very early with a different error:\n> FATAL: null value in column \"classid\" of relation \"pg_depend\" violates not-null constraint\n\nUgh. Compiler bugs coming out of the woodwork?\n\nNot sure what to suggest here. It certainly is useful to us to have\nsparc buildfarm testing, but probably sparc64 would be more useful\nthan sparc32. (It looks like FreeBSD and OpenBSD have dropped sparc32\naltogether, and NetBSD has bumped it to tier II status.) One idea\nis to try -O0 ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Mar 2020 13:51:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: snapper vs. HEAD" } ]
[ { "msg_contents": "Hi,\nI want to apply to GSoC and this is my proposal draft. Please give me feedback.\n---\nI am a 4-year undergraduate CS student at Ural Federal University. This fall I met PostgreSQL and posted the first PR to WAL-G. I liked it so much that I decided to do the graduate work related to WAL-G - to implement backup(s) copying from one store to another (while WIP, I'm waiting for a review). I liked writing in golang, sorting out what works, and then I accidentally found out about GSoC and decided to give it a try.\n\nI saw a list of suggested problems at https://wiki.postgresql.org/wiki/GSoC_2020#WAL-G_performance_improvements_.282020.29 and I was very interested in WAL's history consistency check and page checksum verification features.\n\nIn mid-June, I will have a defense of a diploma thesis for a bachelor’s degree (this should not interfere with GSoC), at the end of June my graduation (nor should this), and in mid-August, I will take exams for admission to a master's program (this should not affect GSoC either).\n\nI want to learn new things, write cool features and improve Postgres.\n---\n\nRegards,\nVolkov Denis", "msg_date": "Sun, 29 Mar 2020 22:31:40 +0500", "msg_from": "Denis Volkov <volkov.denis.dev@yandex.ru>", "msg_from_op": true, "msg_subject": "[GSoC 2020] applicant proposal, Volkov Denis" } ]
[ { "msg_contents": "Hi,\nI want to apply to GSoC and this is my proposal draft. Please give me feedback.\n---\nI am a 4-year undergraduate CS student at Ural Federal University. This fall I met PostgreSQL and posted the first PR to WAL-G. I liked it so much that I decided to do the graduate work related to WAL-G - to implement backup(s) copying from one store to another (while WIP, I'm waiting for a review). I liked writing in golang, sorting out what works, and then I accidentally found out about GSoC and decided to give it a try.\n\nI saw a list of suggested problems at https://wiki.postgresql.org/wiki/GSoC_2020#WAL-G_performance_improvements_.282020.29 and I was very interested in WAL's history consistency check and page checksum verification features.\n\nIn mid-June, I will have a defense of a diploma thesis for a bachelor’s degree (this should not interfere with GSoC), at the end of June my graduation (nor should this), and in mid-August, I will take exams for admission to a master's program (this should not affect GSoC either).\n\nI want to learn new things, write cool features and improve PostgreSQL.\n---\n\nRegards,\nDenis Volkov\n\n\n", "msg_date": "Sun, 29 Mar 2020 22:54:31 +0500", "msg_from": "Denis Volkov <volkov.denis.dev@yandex.ru>", "msg_from_op": true, "msg_subject": "[GSoC 2020] applicant proposal v2, Denis Volkov" }, { "msg_contents": "Greetings,\n\n* Denis Volkov (volkov.denis.dev@yandex.ru) wrote:\n> I want to apply to GSoC and this is my proposal draft. Please give me feedback.\n\nGreat, thanks! I'd suggest you reach out to the mentors listed for this\nproposal directly also to make sure they see your interest, and to chat\nwith them regarding your proposal.\n\nThanks again,\n\nStephen", "msg_date": "Sun, 29 Mar 2020 15:04:57 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: [GSoC 2020] applicant proposal v2, Denis Volkov" } ]
[ { "msg_contents": "Greetings hackers.\n\nWhile working on first-class support for PG collations in the Entity\nFramework Core ORM, I've come across an interesting problem: it doesn't\nseem to be possible to create a database with a collation that isn't\npredefined, and there doesn't seem to be a way to add to that list. I'm\nspecifically looking into creating a database with an ICU collation.\n\nI've seen the discussion in\nhttps://www.postgresql.org/message-id/flat/99faa8eb-9de2-8ec0-0a25-1ad1276167cc%402ndquadrant.com,\nthough to my understanding, that is about importing ICU rather than libc\npredefined collations, and not adding an arbitrary collation to the list\nfrom which databases can be created.\n\nThis seems particularly problematic as a database collation cannot be\naltered once created, leading to an odd chicken-and-egg problem. My initial\nexpectation was for collations in the template database to be taken into\naccount, but that doesn't seem to be the case.\n\nFinally, just a word to say that better support for non-deterministic\ncollations would be greatly appreciated - specifically LIKE support (though\nI realize that isn't trivial). At the moment their actual usefulness seems\nsomewhat limited because of this.\n\nThanks,\n\nShay\n\n", "msg_date": "Sun, 29 Mar 2020 23:31:42 +0200", "msg_from": "Shay Rojansky <roji@roji.org>", "msg_from_op": true, "msg_subject": "Creating a database with a non-predefined collation" }, { "msg_contents": "On Sun, 2020-03-29 at 23:31 +0200, Shay Rojansky wrote:\n> While working on first-class support for PG collations in the Entity Framework Core ORM,\n> I've come across an interesting problem: it doesn't seem to be possible to create a\n> database with a collation that isn't predefined, and there doesn't seem to be a way to\n> add to that list. 
I'm specifically looking into creating a database with an ICU collation.\n> \n> I've seen the discussion in\n> https://www.postgresql.org/message-id/flat/99faa8eb-9de2-8ec0-0a25-1ad1276167cc%402ndquadrant.com,\n> though to my understanding, that is about importing ICU rather than libc predefined collations,\n> and not adding an arbitrary collation to the list from which databases can be created.\n> \n> This seems particularly problematic as a database collation cannot be altered once created,\n> leading to an odd chicken-and-egg problem. My initial expectation was for collations in the\n> template database to be taken into account, but that doesn't seem to be the case.\n\nThis is indeed a missing feature, and the thread you reference was trying to improve\nthings, but enough obstacles surfaced that it didn't make it.\n\nIt is less trivial that it looks at first glance.\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Mon, 30 Mar 2020 07:09:32 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Creating a database with a non-predefined collation" } ]
[ { "msg_contents": "Improve handling of parameter differences in physical replication\n\nWhen certain parameters are changed on a physical replication primary,\nthis is communicated to standbys using the XLOG_PARAMETER_CHANGE WAL\nrecord. The standby then checks whether its own settings are at least\nas big as the ones on the primary. If not, the standby shuts down\nwith a fatal error.\n\nThe correspondence of settings between primary and standby is required\nbecause those settings influence certain shared memory sizings that\nare required for processing WAL records that the primary might send.\nFor example, if the primary sends a prepared transaction, the standby\nmust have had max_prepared_transaction set appropriately or it won't\nbe able to process those WAL records.\n\nHowever, fatally shutting down the standby immediately upon receipt of\nthe parameter change record might be a bit of an overreaction. The\nresources related to those settings are not required immediately at\nthat point, and might never be required if the activity on the primary\ndoes not exhaust all those resources. If we just let the standby roll\non with recovery, it will eventually produce an appropriate error when\nthose resources are used.\n\nSo this patch relaxes this a bit. Upon receipt of\nXLOG_PARAMETER_CHANGE, we still check the settings but only issue a\nwarning and set a global flag if there is a problem. Then when we\nactually hit the resource issue and the flag was set, we issue another\nwarning message with relevant information. At that point we pause\nrecovery, so a hot standby remains usable. 
We also repeat the last\nwarning message once a minute so it is harder to miss or ignore.\n\nReviewed-by: Sergei Kornilov <sk@zsrv.org>\nReviewed-by: Masahiko Sawada <masahiko.sawada@2ndquadrant.com>\nReviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nDiscussion: https://www.postgresql.org/message-id/flat/4ad69a4c-cc9b-0dfe-0352-8b1b0cd36c7b@2ndquadrant.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/246f136e76ecd26844840f2b2057e2c87ec9868d\n\nModified Files\n--------------\ndoc/src/sgml/high-availability.sgml | 48 +++++++++++++++++------\nsrc/backend/access/transam/twophase.c | 3 ++\nsrc/backend/access/transam/xlog.c | 74 ++++++++++++++++++++++++++++++-----\nsrc/backend/storage/ipc/procarray.c | 9 ++++-\nsrc/backend/storage/lmgr/lock.c | 10 +++++\nsrc/include/access/xlog.h | 1 +\n6 files changed, 122 insertions(+), 23 deletions(-)", "msg_date": "Mon, 30 Mar 2020 07:58:54 +0000", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": true, "msg_subject": "pgsql: Improve handling of parameter differences in physical\n replicatio" }, { "msg_contents": "\n\nOn 2020/03/30 16:58, Peter Eisentraut wrote:\n> Improve handling of parameter differences in physical replication\n> \n> When certain parameters are changed on a physical replication primary,\n> this is communicated to standbys using the XLOG_PARAMETER_CHANGE WAL\n> record. The standby then checks whether its own settings are at least\n> as big as the ones on the primary. 
If not, the standby shuts down\n> with a fatal error.\n> \n> The correspondence of settings between primary and standby is required\n> because those settings influence certain shared memory sizings that\n> are required for processing WAL records that the primary might send.\n> For example, if the primary sends a prepared transaction, the standby\n> must have had max_prepared_transaction set appropriately or it won't\n> be able to process those WAL records.\n> \n> However, fatally shutting down the standby immediately upon receipt of\n> the parameter change record might be a bit of an overreaction. The\n> resources related to those settings are not required immediately at\n> that point, and might never be required if the activity on the primary\n> does not exhaust all those resources. If we just let the standby roll\n> on with recovery, it will eventually produce an appropriate error when\n> those resources are used.\n> \n> So this patch relaxes this a bit. Upon receipt of\n> XLOG_PARAMETER_CHANGE, we still check the settings but only issue a\n> warning and set a global flag if there is a problem. Then when we\n> actually hit the resource issue and the flag was set, we issue another\n> warning message with relevant information. At that point we pause\n> recovery, so a hot standby remains usable. We also repeat the last\n> warning message once a minute so it is harder to miss or ignore.\n\nI encountered the trouble maybe related to this commit.\n\nFirstly I set up the master and the standby with max_connections=100 (default value).\nThen I decreased max_connections to 1 only in the standby and restarted\nthe server. 
Thanks to the commit, I saw the following warning message\nin the standby.\n\n WARNING: insufficient setting for parameter max_connections\n DETAIL: max_connections = 1 is a lower setting than on the master server (where its value was 100).\n HINT: Change parameters and restart the server, or there may be resource exhaustion errors sooner or later.\n\nThen I made the script that inserted 1,000,000 rows in one transaction,\nand ran it 30 times at the same time. That is, 30 transactions inserting\nlots of rows were running at the same time.\n\nI confirmed that there are expected number of rows in the master,\nbut found 0 row in the standby unexpectedly. Also I suspected that issue\nhappened because recovery is paused, but pg_is_wal_replay_paused()\nreturned false in the standby.\n\nIsn't this the trouble related to this commit?\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Mon, 30 Mar 2020 19:41:43 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Improve handling of parameter differences in physical\n replicatio" }, { "msg_contents": "Hi,\n\nOn 2020-03-30 19:41:43 +0900, Fujii Masao wrote:\n> On 2020/03/30 16:58, Peter Eisentraut wrote:\n> > Improve handling of parameter differences in physical replication\n> > \n> > When certain parameters are changed on a physical replication primary,\n> > this is communicated to standbys using the XLOG_PARAMETER_CHANGE WAL\n> > record. The standby then checks whether its own settings are at least\n> > as big as the ones on the primary. 
If not, the standby shuts down\n> > with a fatal error.\n> > \n> > The correspondence of settings between primary and standby is required\n> > because those settings influence certain shared memory sizings that\n> > are required for processing WAL records that the primary might send.\n> > For example, if the primary sends a prepared transaction, the standby\n> > must have had max_prepared_transaction set appropriately or it won't\n> > be able to process those WAL records.\n> > \n> > However, fatally shutting down the standby immediately upon receipt of\n> > the parameter change record might be a bit of an overreaction. The\n> > resources related to those settings are not required immediately at\n> > that point, and might never be required if the activity on the primary\n> > does not exhaust all those resources. If we just let the standby roll\n> > on with recovery, it will eventually produce an appropriate error when\n> > those resources are used.\n> > \n> > So this patch relaxes this a bit. Upon receipt of\n> > XLOG_PARAMETER_CHANGE, we still check the settings but only issue a\n> > warning and set a global flag if there is a problem. Then when we\n> > actually hit the resource issue and the flag was set, we issue another\n> > warning message with relevant information.\n\nI find it somewhat hostile that we don't display the actual resource\nerror once the problem is hit - we just pause. Sure, there's going to be\nsome previous log entry explaining what the actual parameter difference\nis - but that could have been weeks ago. 
So either hard to find, or even\nrotated out.\n\n\n> > At that point we pause recovery, so a hot standby remains usable.\n> > We also repeat the last warning message once a minute so it is\n> > harder to miss or ignore.\n\n\nI can't really imagine that the adjustments made in this patch are\nsufficient.\n\nOne important issue seems to me to be the size of the array that\nTransactionIdIsInProgress() allocates:\n\t/*\n\t * If first time through, get workspace to remember main XIDs in. We\n\t * malloc it permanently to avoid repeated palloc/pfree overhead.\n\t */\n\tif (xids == NULL)\n\t{\n\t\t/*\n\t\t * In hot standby mode, reserve enough space to hold all xids in the\n\t\t * known-assigned list. If we later finish recovery, we no longer need\n\t\t * the bigger array, but we don't bother to shrink it.\n\t\t */\n\t\tint\t\t\tmaxxids = RecoveryInProgress() ? TOTAL_MAX_CACHED_SUBXIDS : arrayP->maxProcs;\n\n\t\txids = (TransactionId *) malloc(maxxids * sizeof(TransactionId));\n\t\tif (xids == NULL)\n\t\t\tereport(ERROR,\n\t\t\t\t\t(errcode(ERRCODE_OUT_OF_MEMORY),\n\t\t\t\t\t errmsg(\"out of memory\")));\n\t}\n\nWhich I think means we'll just overrun the xids array in some cases,\ne.g. if KnownAssignedXids overflowed. Obviously we should have a\ncrosscheck in the code (which we don't), but it was previously a\nsupposedly unreachable path.\n\nSimilarly, the allocation in GetSnapshotData() will be too small, I\nthink:\n\tif (snapshot->xip == NULL)\n\t{\n\t\t/*\n\t\t * First call for this snapshot. 
Snapshot is same size whether or not\n\t\t * we are in recovery, see later comments.\n\t\t */\n\t\tsnapshot->xip = (TransactionId *)\n\t\t\tmalloc(GetMaxSnapshotXidCount() * sizeof(TransactionId));\n\t\tif (snapshot->xip == NULL)\n\t\t\tereport(ERROR,\n\t\t\t\t\t(errcode(ERRCODE_OUT_OF_MEMORY),\n\t\t\t\t\t errmsg(\"out of memory\")));\n\t\tAssert(snapshot->subxip == NULL);\n\t\tsnapshot->subxip = (TransactionId *)\n\t\t\tmalloc(GetMaxSnapshotSubxidCount() * sizeof(TransactionId));\n\t\tif (snapshot->subxip == NULL)\n\t\t\tereport(ERROR,\n\t\t\t\t\t(errcode(ERRCODE_OUT_OF_MEMORY),\n\t\t\t\t\t errmsg(\"out of memory\")));\n\t}\n\nI think basically all code using TOTAL_MAX_CACHED_SUBXIDS,\nGetMaxSnapshotSubxidCount(), PROCARRAY_MAXPROCS needs to be reviewed\nmuch more carefully than done here.\n\n\nAlso, shouldn't dynahash be adjusted as well? There's e.g. the\nfollowing HASH_ENTER path:\n\t\t\t\t/* report a generic message */\n\t\t\t\tif (hashp->isshared)\n\t\t\t\t\tereport(ERROR,\n\t\t\t\t\t\t\t(errcode(ERRCODE_OUT_OF_MEMORY),\n\t\t\t\t\t\t\t errmsg(\"out of shared memory\")));\n\n\nI'm also not sure it's ok to not have the waiting in\nProcArrayAdd(). Is it guaranteed that can't be hit now, due to the WAL\nreplay path sometimes adding procs?\n\n\nHow is it safe to just block in StandbyParamErrorPauseRecovery() before\nraising an error? The error would have released lwlocks, but now we\ndon't? If we e.g. block in PrepareRedoAdd() we'll continue to hold\nTwoPhaseStateLock(), but shutting the database down might also acquire\nthat (CheckPointTwoPhase()). Similar with the error in\nKnownAssignedXidsAdd().\n\n\nThis does not seem ready.\n\n\n> I encountered the trouble maybe related to this commit.\n> \n> Firstly I set up the master and the standby with max_connections=100 (default value).\n> Then I decreased max_connections to 1 only in the standby and restarted\n> the server. 
Thanks to the commit, I saw the following warning message\n> in the standby.\n> \n> WARNING: insufficient setting for parameter max_connections\n> DETAIL: max_connections = 1 is a lower setting than on the master server (where its value was 100).\n> HINT: Change parameters and restart the server, or there may be resource exhaustion errors sooner or later.\n> \n> Then I made the script that inserted 1,000,000 rows in one transaction,\n> and ran it 30 times at the same time. That is, 30 transactions inserting\n> lots of rows were running at the same time.\n> \n> I confirmed that there are expected number of rows in the master,\n> but found 0 row in the standby unxpectedly.\n\nHave the relevant records actually been replicated?\n\n\n> Also I suspected that issue\n> happened because recovery is paused, but pg_is_wal_replay_paused()\n> returned false in the standby.\n\nAs far as I understand the commit it shouldn't cause a pause until\nthere's an actual resource exhaustion - in which case there should be\nanother message first (WARNING: recovery paused because of insufficient\nparameter settings).\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 30 Mar 2020 11:10:30 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pgsql: Improve handling of parameter differences in physical\n replicatio" }, { "msg_contents": "\n\nOn 2020/03/31 3:10, Andres Freund wrote:\n> Hi,\n> \n> On 2020-03-30 19:41:43 +0900, Fujii Masao wrote:\n>> On 2020/03/30 16:58, Peter Eisentraut wrote:\n>>> Improve handling of parameter differences in physical replication\n>>>\n>>> When certain parameters are changed on a physical replication primary,\n>>> this is communicated to standbys using the XLOG_PARAMETER_CHANGE WAL\n>>> record. The standby then checks whether its own settings are at least\n>>> as big as the ones on the primary. 
If not, the standby shuts down\n>>> with a fatal error.\n>>>\n>>> The correspondence of settings between primary and standby is required\n>>> because those settings influence certain shared memory sizings that\n>>> are required for processing WAL records that the primary might send.\n>>> For example, if the primary sends a prepared transaction, the standby\n>>> must have had max_prepared_transaction set appropriately or it won't\n>>> be able to process those WAL records.\n>>>\n>>> However, fatally shutting down the standby immediately upon receipt of\n>>> the parameter change record might be a bit of an overreaction. The\n>>> resources related to those settings are not required immediately at\n>>> that point, and might never be required if the activity on the primary\n>>> does not exhaust all those resources. If we just let the standby roll\n>>> on with recovery, it will eventually produce an appropriate error when\n>>> those resources are used.\n>>>\n>>> So this patch relaxes this a bit. Upon receipt of\n>>> XLOG_PARAMETER_CHANGE, we still check the settings but only issue a\n>>> warning and set a global flag if there is a problem. Then when we\n>>> actually hit the resource issue and the flag was set, we issue another\n>>> warning message with relevant information.\n> \n> I find it somewhat hostile that we don't display the actual resource\n> error once the problem is hit - we just pause. Sure, there's going to be\n> some previous log entry explaining what the actual parameter difference\n> is - but that could have been weeks ago. 
So either hard to find, or even\n> rotated out.\n> \n> \n>>> At that point we pause recovery, so a hot standby remains usable.\n>>> We also repeat the last warning message once a minute so it is\n>>> harder to miss or ignore.\n> \n> \n> I can't really imagine that the adjustments made in this patch are\n> sufficient.\n> \n> One important issue seems to me to be the size of the array that\n> TransactionIdIsInProgress() allocates:\n> \t/*\n> \t * If first time through, get workspace to remember main XIDs in. We\n> \t * malloc it permanently to avoid repeated palloc/pfree overhead.\n> \t */\n> \tif (xids == NULL)\n> \t{\n> \t\t/*\n> \t\t * In hot standby mode, reserve enough space to hold all xids in the\n> \t\t * known-assigned list. If we later finish recovery, we no longer need\n> \t\t * the bigger array, but we don't bother to shrink it.\n> \t\t */\n> \t\tint\t\t\tmaxxids = RecoveryInProgress() ? TOTAL_MAX_CACHED_SUBXIDS : arrayP->maxProcs;\n> \n> \t\txids = (TransactionId *) malloc(maxxids * sizeof(TransactionId));\n> \t\tif (xids == NULL)\n> \t\t\tereport(ERROR,\n> \t\t\t\t\t(errcode(ERRCODE_OUT_OF_MEMORY),\n> \t\t\t\t\t errmsg(\"out of memory\")));\n> \t}\n> \n> Which I think means we'll just overrun the xids array in some cases,\n> e.g. if KnownAssignedXids overflowed. Obviously we should have a\n> crosscheck in the code (which we don't), but it was previously a\n> supposedly unreachable path.\n> \n> Similarly, the allocation in GetSnapshotData() will be too small, I\n> think:\n> \tif (snapshot->xip == NULL)\n> \t{\n> \t\t/*\n> \t\t * First call for this snapshot. 
Snapshot is same size whether or not\n> \t\t * we are in recovery, see later comments.\n> \t\t */\n> \t\tsnapshot->xip = (TransactionId *)\n> \t\t\tmalloc(GetMaxSnapshotXidCount() * sizeof(TransactionId));\n> \t\tif (snapshot->xip == NULL)\n> \t\t\tereport(ERROR,\n> \t\t\t\t\t(errcode(ERRCODE_OUT_OF_MEMORY),\n> \t\t\t\t\t errmsg(\"out of memory\")));\n> \t\tAssert(snapshot->subxip == NULL);\n> \t\tsnapshot->subxip = (TransactionId *)\n> \t\t\tmalloc(GetMaxSnapshotSubxidCount() * sizeof(TransactionId));\n> \t\tif (snapshot->subxip == NULL)\n> \t\t\tereport(ERROR,\n> \t\t\t\t\t(errcode(ERRCODE_OUT_OF_MEMORY),\n> \t\t\t\t\t errmsg(\"out of memory\")));\n> \t}\n> \n> I think basically all code using TOTAL_MAX_CACHED_SUBXIDS,\n> GetMaxSnapshotSubxidCount(), PROCARRAY_MAXPROCS needs to be reviewed\n> much more carefully than done here.\n> \n> \n> Also, shouldn't dynahash be adjusted as well? There's e.g. the\n> following HASH_ENTER path:\n> \t\t\t\t/* report a generic message */\n> \t\t\t\tif (hashp->isshared)\n> \t\t\t\t\tereport(ERROR,\n> \t\t\t\t\t\t\t(errcode(ERRCODE_OUT_OF_MEMORY),\n> \t\t\t\t\t\t\t errmsg(\"out of shared memory\")));\n> \n> \n> I'm also not sure it's ok to not have the waiting in\n> ProcArrayAdd(). Is it guaranteed that can't be hit now, due to the WAL\n> replay path sometimes adding procs?\n> \n> \n> How is it safe to just block in StandbyParamErrorPauseRecovery() before\n> raising an error? The error would have released lwlocks, but now we\n> don't? If we e.g. block in PrepareRedoAdd() we'll continue to hold\n> TwoPhaseStateLock(), but shutting the database down might also acquire\n> that (CheckPointTwoPhase()). 
Similar with the error in\n> KnownAssignedXidsAdd().\n> \n> \n> This does not seem ready.\n> \n> \n>> I encountered the trouble maybe related to this commit.\n>>\n>> Firstly I set up the master and the standby with max_connections=100 (default value).\n>> Then I decreased max_connections to 1 only in the standby and restarted\n>> the server. Thanks to the commit, I saw the following warning message\n>> in the standby.\n>>\n>> WARNING: insufficient setting for parameter max_connections\n>> DETAIL: max_connections = 1 is a lower setting than on the master server (where its value was 100).\n>> HINT: Change parameters and restart the server, or there may be resource exhaustion errors sooner or later.\n>>\n>> Then I made the script that inserted 1,000,000 rows in one transaction,\n>> and ran it 30 times at the same time. That is, 30 transactions inserting\n>> lots of rows were running at the same time.\n>>\n>> I confirmed that there are expected number of rows in the master,\n>> but found 0 row in the standby unxpectedly.\n> \n> Have the relevant records actually been replicated?\n\nYeah, I thought I confirmed that, but when I tried to reproduce\nthe issue in the clean env, I failed to do that. So which means\nthat I did mistake... :( Sorry for noise..\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Tue, 31 Mar 2020 10:16:03 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Improve handling of parameter differences in physical\n replicatio" }, { "msg_contents": "On Mon, Mar 30, 2020 at 2:10 PM Andres Freund <andres@anarazel.de> wrote:\n> One important issue seems to me to be the size of the array that\n> TransactionIdIsInProgress() allocates:\n> /*\n> * If first time through, get workspace to remember main XIDs in. 
We\n> * malloc it permanently to avoid repeated palloc/pfree overhead.\n> */\n> if (xids == NULL)\n> {\n> /*\n> * In hot standby mode, reserve enough space to hold all xids in the\n> * known-assigned list. If we later finish recovery, we no longer need\n> * the bigger array, but we don't bother to shrink it.\n> */\n> int maxxids = RecoveryInProgress() ? TOTAL_MAX_CACHED_SUBXIDS : arrayP->maxProcs;\n>\n> xids = (TransactionId *) malloc(maxxids * sizeof(TransactionId));\n> if (xids == NULL)\n> ereport(ERROR,\n> (errcode(ERRCODE_OUT_OF_MEMORY),\n> errmsg(\"out of memory\")));\n> }\n>\n> Which I think means we'll just overrun the xids array in some cases,\n> e.g. if KnownAssignedXids overflowed. Obviously we should have a\n> crosscheck in the code (which we don't), but it was previously a\n> supposedly unreachable path.\n\nI think this patch needs to be reverted. The only places where it\nchanges anything are places where we were about to throw some error\nanyway. But as Andres's analysis shows, that's not nearly good enough.\nI am kind of surprised that Peter thought that would be good enough.\nIt is necessary, for something like this, to investigate all the\nplaces where the code may be relying on a certain assumption, not just\nassume that there's an error check everywhere that we rely on that\nassumption and change only those places.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 3 Apr 2020 13:55:09 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Improve handling of parameter differences in physical\n replicatio" }, { "msg_contents": "On 2020-04-03 19:55, Robert Haas wrote:> I think this patch needs to be \nreverted. The only places where it\n> changes anything are places where we were about to throw some error\n> anyway. 
But as Andres's analysis shows, that's not nearly good enough.\n\nOK, reverted.\n\n\n", "msg_date": "Sat, 4 Apr 2020 09:11:25 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Improve handling of parameter differences in physical\n replicatio" }, { "msg_contents": "On 2020-03-30 20:10, Andres Freund wrote:\n> Also, shouldn't dynahash be adjusted as well? There's e.g. the\n> following HASH_ENTER path:\n> \t\t\t\t/* report a generic message */\n> \t\t\t\tif (hashp->isshared)\n> \t\t\t\t\tereport(ERROR,\n> \t\t\t\t\t\t\t(errcode(ERRCODE_OUT_OF_MEMORY),\n> \t\t\t\t\t\t\t errmsg(\"out of shared memory\")));\n\nCould you explain further what you mean by this? I don't understand how \nthis is related.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 14 Apr 2020 09:17:39 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Improve handling of parameter differences in physical\n replicatio" } ]
[ { "msg_contents": "Hello hackers,\r\n\r\nI found a small issue in the XLogInsertRecord() function: it checks whether it is\r\nallowed to insert a WAL record via XLogInsertAllowed(), and when refused it reports the error \"cannot\r\nmake new WAL entries during recovery\".\r\n\r\nI noticed that XLogInsertAllowed() rejects inserting a WAL record not only sometimes\r\nduring recovery, but also after a 'shutdown' checkpoint. So it would be better to\r\nadd a separate error message \"cannot make new WAL entries during shutdown\"; I\r\nattach a patch doing that.\r\n\r\n Best regards,\r\n\r\n\r\nHighgo Software (Canada/China/Pakistan) \r\nURL : www.highgo.ca \r\nEMAIL: mailto:movead(dot)li(at)highgo(dot)ca", "msg_date": "Mon, 30 Mar 2020 16:58:56 +0800", "msg_from": "\"movead.li@highgo.ca\" <movead.li@highgo.ca>", "msg_from_op": true, "msg_subject": "wal_insert_waring_issue" } ]
[ { "msg_contents": "Hi all,\n\nWhile playing around with Peter E.'s unicode normalization patch [1],\nI found that HEAD failed to build a perfect hash function for any of\nthe four sets of 4-byte keys ranging from 1k to 17k in number. It\nprobably doesn't help that codepoints have nul bytes and often cluster\ninto consecutive ranges. In addition, I found that a couple of the\ncandidate hash multipliers don't compile to shift-and-add\ninstructions, although they were chosen with that intent in mind. It\nseems compilers will only do that if the number is exactly 2^n +/- 1.\n\nUsing the latest gcc and clang, I tested all prime numbers up to 5000\n(plus 8191 for good measure), and found a handful that are compiled\ninto non-imul instructions. Dialing back the version, gcc 4.8 and\nclang 7.0 are the earliest I found that have the same behavior as\nnewer ones. For reference:\n\nhttps://gcc.godbolt.org/z/bxcXHu\n\nIn addition to shift-and-add, there are also a few using lea,\nlea-and-add, or 2 leas.\n\nThen I used the attached program to measure various combinations of\ncompiled instructions using two constant multipliers iterating over\nbytes similar to a generated hash function.\n\n<cc> -O2 -Wall test-const-mult.c test-const-mult-2.c\n./a.out\nMedian of 3 with clang 10:\n\n lea, lea 0.181s\n\n lea, lea+add 0.248s\n lea, shift+add 0.251s\n\n lea+add, shift+add 0.273s\nshift+add, shift+add 0.276s\n\n 2 leas, 2 leas 0.290s\n shift+add, imul 0.329s\n\nTaking this with a grain of salt, it nonetheless seems plausible that\na single lea could be faster than any two instructions here. The only\nprimes that compile to a single lea are 3 and 5, but I've found those\nmultipliers can build hash functions for all our keyword lists, as\ndemonstration. 
None of the others we didn't have already are\nparticularly interesting from a performance point of view.\n\nWith the unicode quick check, I found that the larger sets need (257,\n8191) as multipliers to build the hash table, and none of the smaller\nspecial primes I tested will work.\n\nKeeping these two properties in mind, I came up with the scheme in the\nattached patch that tries adjacent pairs in this array:\n\n(3, 5, 17, 31, 127, 257, 8191)\n\nso that we try (3,5) first, next (5,17), and then all the pure\nshift-and-adds with (257,8191) last.\n\nThe main motivation is to be able to build the unicode quick check\ntables, but if we ever use this functionality in a hot code path, we\nmay as well try to shave a few more cycles while we're at it.\n\n[1] https://www.postgresql.org/message-id/flat/c1909f27-c269-2ed9-12f8-3ab72c8caf7a@2ndquadrant.com\n\n--\nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Mon, 30 Mar 2020 21:33:14 +0800", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "tweaking perfect hash multipliers" }, { "msg_contents": "Hi,\n\nOn 2020-03-30 21:33:14 +0800, John Naylor wrote:\n> Then I used the attached program to measure various combinations of\n> compiled instructions using two constant multipliers iterating over\n> bytes similar to a generated hash function.\n\nIt looks like you didn't attach the program?\n\n\n> <cc> -O2 -Wall test-const-mult.c test-const-mult-2.c\n> ./a.out\n> Median of 3 with clang 10:\n> \n> lea, lea 0.181s\n> \n> lea, lea+add 0.248s\n> lea, shift+add 0.251s\n> \n> lea+add, shift+add 0.273s\n> shift+add, shift+add 0.276s\n> \n> 2 leas, 2 leas 0.290s\n> shift+add, imul 0.329s\n> \n> Taking this with a grain of salt, it nonetheless seems plausible that\n> a single lea could be faster than any two instructions here.\n\nIt's a bit complicated by the fact that there's more execution ports to\nexecute 
shift/add than there ports to compute some form of leas. And\nsome of that won't easily be measurable in a micro-benchmark, because\nthere'll be dependencies between the instruction preventing any\ninstruction level parallelism.\n\nI think the form of lea generated here is among the ones that can only\nbe executed on port 1. Whereas e.g. an register+register/immediate add\ncan be executed on four different ports.\n\nThere's also a significant difference in latency that you might not see\nin your benchmark. E.g. on coffee lake the relevant form of lea has a\nlatency of three cycles, but one independent lea can be \"started\" per\ncycle (agner calls this \"reciprocal throughput). Whereas a shift has a\nlatency of 1 cycle and a reciprocal throughput of 0.5 (lower is better),\nadd has a latency o 1 and a reciprocal throughput of 0.25.\n\nSee the tables in https://www.agner.org/optimize/instruction_tables.pdf\n\nI'm not really sure my musings above matter terribly much, but I just\nwanted to point out why I'd not take too much stock in the above timings\nin isolation. Even a very high latency wouldn't necessarily be penalized\nin a benchmark with one loop iteration independent from each other, but\nwould matter in the real world.\n\n\nCool work!\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 30 Mar 2020 11:31:46 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: tweaking perfect hash multipliers" }, { "msg_contents": "On Tue, Mar 31, 2020 at 2:31 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2020-03-30 21:33:14 +0800, John Naylor wrote:\n> > Then I used the attached program to measure various combinations of\n> > compiled instructions using two constant multipliers iterating over\n> > bytes similar to a generated hash function.\n>\n> It looks like you didn't attach the program?\n\nFunny, I did, but then decided to rename the files. Here they are. 
I\ntried to make the loop similar to how it'd be in the actual hash\nfunction, but leaving out the post-loop modulus and array access. Each\nloop iteration is dependent on the last one's result.\n\n> It's a bit complicated by the fact that there's more execution ports to\n> execute shift/add than there ports to compute some form of leas. And\n> some of that won't easily be measurable in a micro-benchmark, because\n> there'll be dependencies between the instruction preventing any\n> instruction level parallelism.\n>\n> I think the form of lea generated here is among the ones that can only\n> be executed on port 1. Whereas e.g. an register+register/immediate add\n> can be executed on four different ports.\n\nThat's interesting, I'll have to look into that.\n\nThanks for the info!\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Tue, 31 Mar 2020 03:10:59 +0800", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: tweaking perfect hash multipliers" }, { "msg_contents": "On Tue, Mar 31, 2020 at 2:31 AM Andres Freund <andres@anarazel.de> wrote:\n> I think the form of lea generated here is among the ones that can only\n> be executed on port 1. Whereas e.g. an register+register/immediate add\n> can be executed on four different ports.\n\nI looked into slow vs. fast leas, and I think the above are actually\nfast because they have 2 operands.\n\nleal (%rdi,%rdi,2), %eax\n\nA 3-op lea would look like this:\n\nleal 42(%rdi,%rdi,8), %ecx\n\nIn other words, the scale doesn't count as an operand. 
Although I've\nseen in a couple places say that a non-1 scale adds a cycle of latency\nfor some AMD chips.\n\nSome interesting discussion in these LLVM commits and discussion from\n2017 about avoiding slow leas:\n\nhttps://reviews.llvm.org/D32277\nhttps://reviews.llvm.org/D32352\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Tue, 31 Mar 2020 16:05:55 +0800", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: tweaking perfect hash multipliers" }, { "msg_contents": "On Tue, Mar 31, 2020 at 4:05 PM John Naylor <john.naylor@2ndquadrant.com> wrote:\n>\n> On Tue, Mar 31, 2020 at 2:31 AM Andres Freund <andres@anarazel.de> wrote:\n> > I think the form of lea generated here is among the ones that can only\n> > be executed on port 1. Whereas e.g. an register+register/immediate add\n> > can be executed on four different ports.\n>\n> I looked into slow vs. fast leas, and I think the above are actually\n> fast because they have 2 operands.\n\nNo, scratch that, it seems the two forms of lea are:\n\nleal (,%rdx,8), %ecx\nleal (%rdx,%rdx,8), %ecx\n\nThe first operand in both is the implicit zero, so with 3 and 5 we do\nget the slow lea on some architectures. So I've only kept the\nshift-and-add multipliers in v2. I also changed the order of iteration\nof the parameters, for speed. Before, it took over 30 seconds to build\nthe unicode quick check tables, now it takes under 2 seconds.\n\n--\nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services", "msg_date": "Thu, 2 Apr 2020 02:54:43 +0800", "msg_from": "John Naylor <john.naylor@2ndquadrant.com>", "msg_from_op": true, "msg_subject": "Re: tweaking perfect hash multipliers" } ]
[ { "msg_contents": "Hi,\n\nThe Release Management Team (RMT) for the PostgreSQL 13 is assembled and\nhas determined that the feature freeze date for the PostgreSQL 11\nrelease will be April 7, 2020. This means that any feature for the\nPostgreSQL 13 release **must be committed by April 7, 2020 AOE**\n(\"anywhere on earth\")[1]. In other words, by April 8, it is too late.\n\nThis naturally extends the March 2020 Commitfest to April 7, 2020. After\nthe freeze is in effect, any open feature in the current Commitfest will\nbe moved into the subsequent one.\n\nOpen items for the PostgreSQL 13 release will be tracked here:\n\n\thttps://wiki.postgresql.org/wiki/PostgreSQL_13_Open_Items\n\nFor the PostgreSQL 13 release, the release management team is composed\nof:\n\n\n Peter Geoghegan <pg@bowt.ie>\n Alvaro Herrera <alvherre@2ndquadrant.com>\n Jonathan Katz <jkatz@postgresql.org>\n\nFor the time being, if you have any questions about the process, please\nfeel free to email any member of the RMT. We will send out notes with\nupdates and additional guidance in the near future.\n\nThanks!\n\nJonathan\n\n[1] https://en.wikipedia.org/wiki/Anywhere_on_Earth", "msg_date": "Mon, 30 Mar 2020 10:18:03 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "PostgreSQL 13 Feature Freeze + Release Management Team (RMT)" }, { "msg_contents": "On 3/30/20 10:18 AM, Jonathan S. Katz wrote:\n> Hi,\n> \n> The Release Management Team (RMT) for the PostgreSQL 13 is assembled and\n> has determined that the feature freeze date for the PostgreSQL 11\n> release will be April 7, 2020. This means that any feature for the\n> PostgreSQL 13 release **must be committed by April 7, 2020 AOE**\n> (\"anywhere on earth\")[1]. In other words, by April 8, it is too late.\n> \n> This naturally extends the March 2020 Commitfest to April 7, 2020. 
After\n> the freeze is in effect, any open feature in the current Commitfest will\n> be moved into the subsequent one.\n\nAs a reminder, the commit fest and feature freeze for PostgreSQL 13 (not\n11) starts at '2020-04-08T00:00:00-12'::timestamptz -- all new features\nfor PostgreSQL 13 must be committed by then.\n\nAll the best for the final push. I have seen some very exciting patches\nland and PostgreSQL 13 is shaping up to be another excellent release!\n\nJonathan", "msg_date": "Mon, 6 Apr 2020 09:31:23 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL 13 Feature Freeze + Release Management Team (RMT)" } ]
[ { "msg_contents": "Hi,\nThis patch removes some reassigned values, safely.\n\nPlancat.c needs a more careful review.\n\nBest regards\nRanier Vilela", "msg_date": "Mon, 30 Mar 2020 11:29:22 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] Remove some reassigned values." } ]
[ { "msg_contents": "Hi\n\nwhen I was in talk with Silvio Moioli, I found strange hash join. Hash was\ncreated from bigger table.\n\nhttps://www.postgresql.org/message-id/79dd683d-3296-1b21-ab4a-28fdc2d98807%40suse.de\n\nNow it looks so materialized CTE disallow hash\n\n\ncreate table bigger(a int);\ncreate table smaller(a int);\ninsert into bigger select random()* 10000 from generate_series(1,100000);\ninsert into smaller select i from generate_series(1,100000) g(i);\n\nanalyze bigger, smaller;\n\n-- no problem\nexplain analyze select * from bigger b join smaller s on b.a = s.a;\n\npostgres=# explain analyze select * from bigger b join smaller s on b.a =\ns.a;\n QUERY PLAN\n\n----------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=3084.00..7075.00 rows=100000 width=8) (actual\ntime=32.937..87.276 rows=99994 loops=1)\n Hash Cond: (b.a = s.a)\n -> Seq Scan on bigger b (cost=0.00..1443.00 rows=100000 width=4)\n(actual time=0.028..8.546 rows=100000 loops=1)\n -> Hash (cost=1443.00..1443.00 rows=100000 width=4) (actual\ntime=32.423..32.423 rows=100000 loops=1)\n Buckets: 131072 Batches: 2 Memory Usage: 2785kB\n -> Seq Scan on smaller s (cost=0.00..1443.00 rows=100000\nwidth=4) (actual time=0.025..9.931 rows=100000 loops=1)\n Planning Time: 0.438 ms\n Execution Time: 91.193 ms\n(8 rows)\n\nbut with materialized CTE\n\npostgres=# explain analyze with b as materialized (select * from bigger), s\nas materialized (select * from smaller) select * from b join s on b.a = s.a;\n QUERY PLAN\n\n----------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=23495.64..773995.64 rows=50000000 width=8) (actual\ntime=141.242..193.375 rows=99994 loops=1)\n Merge Cond: (b.a = s.a)\n CTE b\n -> Seq Scan on bigger (cost=0.00..1443.00 rows=100000 width=4)\n(actual time=0.026..11.083 rows=100000 loops=1)\n CTE s\n -> Seq Scan on 
smaller (cost=0.00..1443.00 rows=100000 width=4)\n(actual time=0.015..9.161 rows=100000 loops=1)\n -> Sort (cost=10304.82..10554.82 rows=100000 width=4) (actual\ntime=78.775..90.953 rows=100000 loops=1)\n Sort Key: b.a\n Sort Method: external merge Disk: 1376kB\n -> CTE Scan on b (cost=0.00..2000.00 rows=100000 width=4)\n(actual time=0.033..39.274 rows=100000 loops=1)\n -> Sort (cost=10304.82..10554.82 rows=100000 width=4) (actual\ntime=62.453..74.004 rows=99996 loops=1)\n Sort Key: s.a\n Sort Method: external sort Disk: 1768kB\n -> CTE Scan on s (cost=0.00..2000.00 rows=100000 width=4)\n(actual time=0.018..31.669 rows=100000 loops=1)\n Planning Time: 0.303 ms\n Execution Time: 199.919 ms\n(16 rows)\n\nIt doesn't use hash join - the estimations are perfect, but plan is\nsuboptimal\n\nRegards\n\nPavel\n\nHiwhen I was in talk with Silvio Moioli, I found strange hash join. Hash was created from bigger table.https://www.postgresql.org/message-id/79dd683d-3296-1b21-ab4a-28fdc2d98807%40suse.deNow it looks so materialized CTE disallow hashcreate table bigger(a int);create table smaller(a int);insert into bigger select random()* 10000 from generate_series(1,100000);insert into smaller select i from generate_series(1,100000) g(i);analyze bigger, smaller;-- no problemexplain analyze select * from bigger b join smaller s on b.a = s.a;postgres=# explain analyze select * from bigger b join smaller s on b.a = s.a;                                                         QUERY PLAN                                                         ---------------------------------------------------------------------------------------------------------------------------- Hash Join  (cost=3084.00..7075.00 rows=100000 width=8) (actual time=32.937..87.276 rows=99994 loops=1)   Hash Cond: (b.a = s.a)   ->  Seq Scan on bigger b  (cost=0.00..1443.00 rows=100000 width=4) (actual time=0.028..8.546 rows=100000 loops=1)   ->  Hash  (cost=1443.00..1443.00 rows=100000 width=4) (actual time=32.423..32.423 
rows=100000 loops=1)         Buckets: 131072  Batches: 2  Memory Usage: 2785kB         ->  Seq Scan on smaller s  (cost=0.00..1443.00 rows=100000 width=4) (actual time=0.025..9.931 rows=100000 loops=1) Planning Time: 0.438 ms Execution Time: 91.193 ms(8 rows)but with materialized CTEpostgres=# explain analyze with b as materialized (select * from bigger), s as materialized (select * from smaller) select * from b join s on b.a = s.a;                                                      QUERY PLAN                                                      ---------------------------------------------------------------------------------------------------------------------- Merge Join  (cost=23495.64..773995.64 rows=50000000 width=8) (actual time=141.242..193.375 rows=99994 loops=1)   Merge Cond: (b.a = s.a)   CTE b     ->  Seq Scan on bigger  (cost=0.00..1443.00 rows=100000 width=4) (actual time=0.026..11.083 rows=100000 loops=1)   CTE s     ->  Seq Scan on smaller  (cost=0.00..1443.00 rows=100000 width=4) (actual time=0.015..9.161 rows=100000 loops=1)   ->  Sort  (cost=10304.82..10554.82 rows=100000 width=4) (actual time=78.775..90.953 rows=100000 loops=1)         Sort Key: b.a         Sort Method: external merge  Disk: 1376kB         ->  CTE Scan on b  (cost=0.00..2000.00 rows=100000 width=4) (actual time=0.033..39.274 rows=100000 loops=1)   ->  Sort  (cost=10304.82..10554.82 rows=100000 width=4) (actual time=62.453..74.004 rows=99996 loops=1)         Sort Key: s.a         Sort Method: external sort  Disk: 1768kB         ->  CTE Scan on s  (cost=0.00..2000.00 rows=100000 width=4) (actual time=0.018..31.669 rows=100000 loops=1) Planning Time: 0.303 ms Execution Time: 199.919 ms(16 rows)It doesn't use hash join - the estimations are perfect, but plan is suboptimalRegardsPavel", "msg_date": "Mon, 30 Mar 2020 18:06:44 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "materialization blocks hash join" }, { "msg_contents": "po 
30. 3. 2020 v 18:06 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\nnapsal:\n\n> Hi\n>\n> when I was in talk with Silvio Moioli, I found strange hash join. Hash was\n> created from bigger table.\n>\n>\n> https://www.postgresql.org/message-id/79dd683d-3296-1b21-ab4a-28fdc2d98807%40suse.de\n>\n> Now it looks so materialized CTE disallow hash\n>\n>\n> create table bigger(a int);\n> create table smaller(a int);\n> insert into bigger select random()* 10000 from generate_series(1,100000);\n> insert into smaller select i from generate_series(1,100000) g(i);\n>\n> analyze bigger, smaller;\n>\n> -- no problem\n> explain analyze select * from bigger b join smaller s on b.a = s.a;\n>\n> postgres=# explain analyze select * from bigger b join smaller s on b.a =\n> s.a;\n> QUERY PLAN\n>\n>\n> ----------------------------------------------------------------------------------------------------------------------------\n> Hash Join (cost=3084.00..7075.00 rows=100000 width=8) (actual\n> time=32.937..87.276 rows=99994 loops=1)\n> Hash Cond: (b.a = s.a)\n> -> Seq Scan on bigger b (cost=0.00..1443.00 rows=100000 width=4)\n> (actual time=0.028..8.546 rows=100000 loops=1)\n> -> Hash (cost=1443.00..1443.00 rows=100000 width=4) (actual\n> time=32.423..32.423 rows=100000 loops=1)\n> Buckets: 131072 Batches: 2 Memory Usage: 2785kB\n> -> Seq Scan on smaller s (cost=0.00..1443.00 rows=100000\n> width=4) (actual time=0.025..9.931 rows=100000 loops=1)\n> Planning Time: 0.438 ms\n> Execution Time: 91.193 ms\n> (8 rows)\n>\n> but with materialized CTE\n>\n> postgres=# explain analyze with b as materialized (select * from bigger),\n> s as materialized (select * from smaller) select * from b join s on b.a =\n> s.a;\n> QUERY PLAN\n>\n>\n> ----------------------------------------------------------------------------------------------------------------------\n> Merge Join (cost=23495.64..773995.64 rows=50000000 width=8) (actual\n> time=141.242..193.375 rows=99994 loops=1)\n> Merge Cond: (b.a = 
s.a)\n> CTE b\n> -> Seq Scan on bigger (cost=0.00..1443.00 rows=100000 width=4)\n> (actual time=0.026..11.083 rows=100000 loops=1)\n> CTE s\n> -> Seq Scan on smaller (cost=0.00..1443.00 rows=100000 width=4)\n> (actual time=0.015..9.161 rows=100000 loops=1)\n> -> Sort (cost=10304.82..10554.82 rows=100000 width=4) (actual\n> time=78.775..90.953 rows=100000 loops=1)\n> Sort Key: b.a\n> Sort Method: external merge Disk: 1376kB\n> -> CTE Scan on b (cost=0.00..2000.00 rows=100000 width=4)\n> (actual time=0.033..39.274 rows=100000 loops=1)\n> -> Sort (cost=10304.82..10554.82 rows=100000 width=4) (actual\n> time=62.453..74.004 rows=99996 loops=1)\n> Sort Key: s.a\n> Sort Method: external sort Disk: 1768kB\n> -> CTE Scan on s (cost=0.00..2000.00 rows=100000 width=4)\n> (actual time=0.018..31.669 rows=100000 loops=1)\n> Planning Time: 0.303 ms\n> Execution Time: 199.919 ms\n> (16 rows)\n>\n> It doesn't use hash join - the estimations are perfect, but plan is\n> suboptimal\n>\n\nI was wrong, the estimation on CTE is ok, but JOIN estimation is bad\n\nMerge Join (cost=23495.64..773995.64 rows=50000000 width=8) (actual\ntime=141.242..193.375 rows=99994 loops=1)\n\n\n> Regards\n>\n> Pavel\n>\n", "msg_date": "Mon, 30 Mar 2020 18:14:42 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: materialization blocks hash join" }, { "msg_contents": "On Mon, Mar 30, 2020 at 06:14:42PM +0200, Pavel Stehule wrote:\n>po 30. 3. 2020 v 18:06 odesílatel Pavel Stehule <pavel.stehule@gmail.com>\n>napsal:\n>\n>> Hi\n>>\n>> when I was in talk with Silvio Moioli, I found strange hash join. 
Hash was\n>> created from bigger table.\n>>\n>>\n>> https://www.postgresql.org/message-id/79dd683d-3296-1b21-ab4a-28fdc2d98807%40suse.de\n>>\n>> Now it looks so materialized CTE disallow hash\n>>\n>>\n>> create table bigger(a int);\n>> create table smaller(a int);\n>> insert into bigger select random()* 10000 from generate_series(1,100000);\n>> insert into smaller select i from generate_series(1,100000) g(i);\n>>\n>> analyze bigger, smaller;\n>>\n>> -- no problem\n>> explain analyze select * from bigger b join smaller s on b.a = s.a;\n>>\n>> postgres=# explain analyze select * from bigger b join smaller s on b.a =\n>> s.a;\n>> QUERY PLAN\n>>\n>>\n>> ----------------------------------------------------------------------------------------------------------------------------\n>> Hash Join (cost=3084.00..7075.00 rows=100000 width=8) (actual\n>> time=32.937..87.276 rows=99994 loops=1)\n>> Hash Cond: (b.a = s.a)\n>> -> Seq Scan on bigger b (cost=0.00..1443.00 rows=100000 width=4)\n>> (actual time=0.028..8.546 rows=100000 loops=1)\n>> -> Hash (cost=1443.00..1443.00 rows=100000 width=4) (actual\n>> time=32.423..32.423 rows=100000 loops=1)\n>> Buckets: 131072 Batches: 2 Memory Usage: 2785kB\n>> -> Seq Scan on smaller s (cost=0.00..1443.00 rows=100000\n>> width=4) (actual time=0.025..9.931 rows=100000 loops=1)\n>> Planning Time: 0.438 ms\n>> Execution Time: 91.193 ms\n>> (8 rows)\n>>\n>> but with materialized CTE\n>>\n>> postgres=# explain analyze with b as materialized (select * from bigger),\n>> s as materialized (select * from smaller) select * from b join s on b.a =\n>> s.a;\n>> QUERY PLAN\n>>\n>>\n>> ----------------------------------------------------------------------------------------------------------------------\n>> Merge Join (cost=23495.64..773995.64 rows=50000000 width=8) (actual\n>> time=141.242..193.375 rows=99994 loops=1)\n>> Merge Cond: (b.a = s.a)\n>> CTE b\n>> -> Seq Scan on bigger (cost=0.00..1443.00 rows=100000 width=4)\n>> (actual time=0.026..11.083 
rows=100000 loops=1)\n>> CTE s\n>> -> Seq Scan on smaller (cost=0.00..1443.00 rows=100000 width=4)\n>> (actual time=0.015..9.161 rows=100000 loops=1)\n>> -> Sort (cost=10304.82..10554.82 rows=100000 width=4) (actual\n>> time=78.775..90.953 rows=100000 loops=1)\n>> Sort Key: b.a\n>> Sort Method: external merge Disk: 1376kB\n>> -> CTE Scan on b (cost=0.00..2000.00 rows=100000 width=4)\n>> (actual time=0.033..39.274 rows=100000 loops=1)\n>> -> Sort (cost=10304.82..10554.82 rows=100000 width=4) (actual\n>> time=62.453..74.004 rows=99996 loops=1)\n>> Sort Key: s.a\n>> Sort Method: external sort Disk: 1768kB\n>> -> CTE Scan on s (cost=0.00..2000.00 rows=100000 width=4)\n>> (actual time=0.018..31.669 rows=100000 loops=1)\n>> Planning Time: 0.303 ms\n>> Execution Time: 199.919 ms\n>> (16 rows)\n>>\n>> It doesn't use hash join - the estimations are perfect, but plan is\n>> suboptimal\n>>\n>\n>I was wrong, the estimation on CTE is ok, but JOIN estimation is bad\n>\n>Merge Join (cost=23495.64..773995.64 rows=50000000 width=8) (actual\n>time=141.242..193.375 rows=99994 loops=1)\n>\n\nThat's because eqjoinsel_inner won't have any statistics for either side\nof the join, so it'll use default ndistinct values (200), resulting in\nestimate of 0.5% for the join condition.\n\nBut this should not affect the choice of join algorithm, I think,\nbecause that's only the output of the join.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n", "msg_date": "Mon, 30 Mar 2020 18:51:28 +0200", "msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>", "msg_from_op": false, "msg_subject": "Re: materialization blocks hash join" }, { "msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> That's because eqjoinsel_inner won't have any statistics for either side\n> of the join, so it'll use default ndistinct values (200), resulting in\n> estimate of 0.5% for the join condition.\n\nRight.\n\n> But 
this should not affect the choice of join algorithm, I think,\n> because that's only the output of the join.\n\nLack of stats will also discourage use of a hash join, because the\ndefault assumption in the absence of stats is that the join column\nhas a pretty non-flat distribution, risking clumping into a few\nhash buckets. Merge join is less sensitive to the data distribution\nso it tends to come out as preferred in such cases.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 30 Mar 2020 13:13:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: materialization blocks hash join" } ]
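The 50000000-row estimate in the Merge Join node discussed above is exactly what the planner's no-statistics fallback produces: with no column statistics available for either materialized CTE output, both join sides fall back to the default ndistinct of 200, and the equijoin selectivity is taken as 1/max(nd1, nd2). A quick sketch of that arithmetic (simplified — the real eqjoinsel_inner also folds in MCV lists and null fractions):

```python
# Simplified model of the planner's no-statistics equijoin row estimate.
# With no stats (as for materialized CTE outputs), both sides fall back
# to DEFAULT_NUM_DISTINCT = 200 and selectivity = 1 / max(nd1, nd2).

DEFAULT_NUM_DISTINCT = 200

def default_eqjoin_rows(outer_rows, inner_rows,
                        nd_outer=DEFAULT_NUM_DISTINCT,
                        nd_inner=DEFAULT_NUM_DISTINCT):
    selectivity = 1.0 / max(nd_outer, nd_inner)
    return outer_rows * inner_rows * selectivity

# Both CTEs return 100000 rows; with default ndistinct on both sides:
print(default_eqjoin_rows(100000, 100000))  # 50000000.0
```

This reproduces the rows=50000000 figure in the Merge Join plan above (100000 × 100000 × 1/200).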
[ { "msg_contents": "Hi,\nI'm not sure that the patch is 100% correct.\nBut the fix is about expression about always true.\nBut if this patch is correct, he fix one possible bug.\n\nThe comment says:\n* Perform checking of FSM after releasing lock, the fsm is\n* approximate, after all.\n\nBut this is not what the code does, apparently it checks before unlocking.\n\nbest regards,\nRanier Vilela", "msg_date": "Mon, 30 Mar 2020 15:07:40 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] remove condition always true\n (/src/backend/access/heap/vacuumlazy.c)" }, { "msg_contents": "Hi,\n\nOn 2020-03-30 15:07:40 -0300, Ranier Vilela wrote:\n> I'm not sure that the patch is 100% correct.\n\nThis is *NOT* correct.\n\n\n> But the fix is about expression about always true.\n> But if this patch is correct, he fix one possible bug.\n> \n> The comment says:\n> * Perform checking of FSM after releasing lock, the fsm is\n> * approximate, after all.\n> \n> But this is not what the code does, apparently it checks before unlocking.\n\nNo, it doesn't. The freespace check isn't the PageIsNew(), it's the\nGetRecordedFreeSpace() call. Which happens after unlocking.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 30 Mar 2020 12:05:05 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] remove condition always true\n (/src/backend/access/heap/vacuumlazy.c)" }, { "msg_contents": "Em seg., 30 de mar. de 2020 às 18:14, Andres Freund <andres@anarazel.de>\nescreveu:\n\n> Hi,\n>\n> On 2020-03-30 14:10:29 -0700, Andres Freund wrote:\n> > On 2020-03-30 17:08:01 -0300, Ranier Vilela wrote:\n> > > Em seg., 30 de mar. 
de 2020 às 16:05, Andres Freund <\n> andres@anarazel.de>\n> > > escreveu:\n> > >\n> > > > Hi,\n> > > >\n> > > > On 2020-03-30 15:07:40 -0300, Ranier Vilela wrote:\n> > > > > I'm not sure that the patch is 100% correct.\n> > > >\n> > > > This is *NOT* correct.\n> > > >\n> > > Anyway, the original source, still wrong.\n> > > What is the use of testing PageIsNew (page) twice in a row, if nothing\n> has\n> > > changed.\n> >\n> > Yea, that can be reduced. It's pretty harmless though.\n> >\n> > We used to require a cleanup lock (which requires dropping the lock,\n> > acquiring a cleanup lock - which allows for others to make the page be\n> > not empty) before acting on the empty page in vacuum. That's why\n> > PageIsNew() had to be checked again.\n\nWell, this is what the patch does, promove reduced and continue to check\nPageIsNew after unlock.\n\nregards,\nRanier Vilela", "msg_date": "Mon, 30 Mar 2020 19:07:30 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] remove condition always true\n (/src/backend/access/heap/vacuumlazy.c)" }, { "msg_contents": "Hi,\nThanks for the commit.\n\nRanier Vilela", "msg_date": "Tue, 31 Mar 2020 11:26:31 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] remove condition always true\n (/src/backend/access/heap/vacuumlazy.c)" } ]
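The point conceded in the thread above — that re-testing PageIsNew() while nothing can have changed is redundant and can be reduced — comes down to removing an inner test whose predicate is unchanged between the two checks. A toy model of the control-flow equivalence (Python, purely illustrative; the names are stand-ins for the C code in vacuumlazy.c, not a reproduction of it):

```python
from itertools import product

def original_flow(page_is_new, recorded_free_space):
    """Old shape: the inner test repeats the outer one verbatim."""
    if page_is_new:
        if page_is_new:  # always true here: predicate unchanged in between
            return recorded_free_space == 0
    return False

def reduced_flow(page_is_new, recorded_free_space):
    """Same logic with the redundant inner test removed."""
    if page_is_new:
        return recorded_free_space == 0
    return False

# The two control flows agree on every input combination.
for is_new, free_space in product([False, True], [0, 8192]):
    assert original_flow(is_new, free_space) == reduced_flow(is_new, free_space)
```

Since the predicate cannot change between the two tests while the buffer lock is held, the simplification cannot change behavior — which is why the commit could fold the checks together.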
[ { "msg_contents": "Hi,\n\nheapam_index_build_range_scan() has the following, long standing,\ncomment:\n\n\t\t/*\n\t\t * When dealing with a HOT-chain of updated tuples, we want to index\n\t\t * the values of the live tuple (if any), but index it under the TID\n\t\t * of the chain's root tuple. This approach is necessary to preserve\n\t\t * the HOT-chain structure in the heap. So we need to be able to find\n\t\t * the root item offset for every tuple that's in a HOT-chain. When\n\t\t * first reaching a new page of the relation, call\n\t\t * heap_get_root_tuples() to build a map of root item offsets on the\n\t\t * page.\n\t\t *\n\t\t * It might look unsafe to use this information across buffer\n\t\t * lock/unlock. However, we hold ShareLock on the table so no\n\t\t * ordinary insert/update/delete should occur; and we hold pin on the\n\t\t * buffer continuously while visiting the page, so no pruning\n\t\t * operation can occur either.\n\t\t *\n\t\t * Also, although our opinions about tuple liveness could change while\n\t\t * we scan the page (due to concurrent transaction commits/aborts),\n\t\t * the chain root locations won't, so this info doesn't need to be\n\t\t * rebuilt after waiting for another transaction.\n\t\t *\n\t\t * Note the implied assumption that there is no more than one live\n\t\t * tuple per HOT-chain --- else we could create more than one index\n\t\t * entry pointing to the same root tuple.\n\t\t */\n\nI don't think the second paragraph has been true for a *long* time. At\nleast since CREATE INDEX CONCURRENTLY was introduced.\n\nThere's also:\n\t\t\t/*\n\t\t\t * We could possibly get away with not locking the buffer here,\n\t\t\t * since caller should hold ShareLock on the relation, but let's\n\t\t\t * be conservative about it. 
(This remark is still correct even\n\t\t\t * with HOT-pruning: our pin on the buffer prevents pruning.)\n\t\t\t */\n\t\t\tLockBuffer(hscan->rs_cbuf, BUFFER_LOCK_SHARE);\n\nand\n\t\t\t\t\t/*\n\t\t\t\t\t * Since caller should hold ShareLock or better, normally\n\t\t\t\t\t * the only way to see this is if it was inserted earlier\n\t\t\t\t\t * in our own transaction. However, it can happen in\n\t\t\t\t\t * system catalogs, since we tend to release write lock\n\t\t\t\t\t * before commit there. Give a warning if neither case\n\t\t\t\t\t * applies.\n\t\t\t\t\t */\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 30 Mar 2020 18:40:07 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Very outdated comments in heapam_index_build_range_scan." } ]
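The heap_get_root_tuples() call described in the first comment above exists so a live HOT-chain member can be indexed under the TID of its chain's root line pointer. A toy model of the per-page root map it builds (Python; illustrative only — the real function walks line pointers and t_ctid links on the heap page rather than taking chains as input):

```python
def build_root_map(chains):
    """Map every HOT-chain member offset to its chain's root offset.

    chains: list of chains, each a list of line-pointer offset numbers
    with the root tuple first (a chain of length 1 is a lone tuple).
    """
    root_map = {}
    for chain in chains:
        root = chain[0]
        for offset in chain:
            root_map[offset] = root
    return root_map

# A page with two chains: offsets 1 -> 2 -> 3 (two HOT updates) and a lone 4.
page_chains = [[1, 2, 3], [4]]
print(build_root_map(page_chains))  # {1: 1, 2: 1, 3: 1, 4: 4}
```

With such a map, whichever chain member is judged live gets its index entry pointed at the root offset, preserving the HOT-chain structure the comment describes.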
[ { "msg_contents": "https://docs.google.com/document/d/1t35T0orZP7QLsOIquVBf0YNp7X_0aak18SFdNAo5hlQ/edit#", "msg_date": "Tue, 31 Mar 2020 06:18:09 +0000", "msg_from": "Ankil Patel <ankil.patel@edu.uwaterloo.ca>", "msg_from_op": true, "msg_subject": "Gsoc Draft Proposal" } ]
[ { "msg_contents": "A collection of random typos in docs and comments spotted while working around\nthe tree.\n\ncheers ./daniel", "msg_date": "Tue, 31 Mar 2020 15:37:10 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Random set of typos spotted" }, { "msg_contents": "On Tue, Mar 31, 2020 at 3:37 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n> A collection of random typos in docs and comments spotted while working around\n> the tree.\n\n\nThanks, pushed!\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Tue, 31 Mar 2020 16:00:42 +0200", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Random set of typos spotted" } ]
[ { "msg_contents": "Hello,\n\nA colleague of mine reported an expected behavior.\n\nOn production cluster is in crash recovery, eg. after killing a backend, the\nWALs ready to be archived are removed before being archived.\n\nSee in attachment the reproduction script \"non-arch-wal-on-recovery.bash\".\n\nThis behavior has been introduced in 78ea8b5daab9237fd42d7a8a836c1c451765499f.\nFunction XLogArchiveCheckDone() badly consider the in crashed recovery\nproduction cluster as a standby without archive_mode=always. So the check\nconclude the WAL can be removed safely.\n\n bool inRecovery = RecoveryInProgress();\n \n /*\n * The file is always deletable if archive_mode is \"off\". On standbys\n * archiving is disabled if archive_mode is \"on\", and enabled with\n * \"always\". On a primary, archiving is enabled if archive_mode is \"on\"\n * or \"always\".\n */\n if (!((XLogArchivingActive() && !inRecovery) ||\n (XLogArchivingAlways() && inRecovery)))\n return true;\n\nPlease find in attachment a patch that fix this issue using the following test\ninstead:\n\n if (!((XLogArchivingActive() && !StandbyModeRequested) ||\n (XLogArchivingAlways() && inRecovery)))\n return true;\n\nI'm not sure if we should rely on StandbyModeRequested for the second part of\nthe test as well thought. What was the point to rely on RecoveryInProgress() to\nget the recovery status from shared mem?\n\nRegards,", "msg_date": "Tue, 31 Mar 2020 17:22:29 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": true, "msg_subject": "[BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "\n\nOn 2020/04/01 0:22, Jehan-Guillaume de Rorthais wrote:\n> Hello,\n> \n> A colleague of mine reported an expected behavior.\n> \n> On production cluster is in crash recovery, eg. 
after killing a backend, the\n> WALs ready to be archived are removed before being archived.\n> \n> See in attachment the reproduction script \"non-arch-wal-on-recovery.bash\".\n> \n> This behavior has been introduced in 78ea8b5daab9237fd42d7a8a836c1c451765499f.\n> Function XLogArchiveCheckDone() badly consider the in crashed recovery\n> production cluster as a standby without archive_mode=always. So the check\n> conclude the WAL can be removed safely.\n\nThanks for the report! Yeah, this seems a bug.\n\n> bool inRecovery = RecoveryInProgress();\n> \n> /*\n> * The file is always deletable if archive_mode is \"off\". On standbys\n> * archiving is disabled if archive_mode is \"on\", and enabled with\n> * \"always\". On a primary, archiving is enabled if archive_mode is \"on\"\n> * or \"always\".\n> */\n> if (!((XLogArchivingActive() && !inRecovery) ||\n> (XLogArchivingAlways() && inRecovery)))\n> return true;\n> \n> Please find in attachment a patch that fix this issue using the following test\n> instead:\n> \n> if (!((XLogArchivingActive() && !StandbyModeRequested) ||\n> (XLogArchivingAlways() && inRecovery)))\n> return true;\n> \n> I'm not sure if we should rely on StandbyModeRequested for the second part of\n> the test as well thought. What was the point to rely on RecoveryInProgress() to\n> get the recovery status from shared mem?\n\nSince StandbyModeRequested is the startup process-local variable,\nit basically cannot be used in XLogArchiveCheckDone() that other process\nmay call. So another approach would be necessary... 
One straight idea is to\nadd new shmem flag indicating whether we are in standby mode or not.\nAnother is to make the startup process remove .ready file if necessary.\n\nIf it's not easy to fix the issue, we might need to just revert the commit\nthat introduced the issue at first.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n", "msg_date": "Wed, 1 Apr 2020 17:27:22 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "On Wed, 1 Apr 2020 17:27:22 +0900\nFujii Masao <masao.fujii@oss.nttdata.com> wrote:\n[...]\n> > bool inRecovery = RecoveryInProgress();\n> > \n> > /*\n> > * The file is always deletable if archive_mode is \"off\". On standbys\n> > * archiving is disabled if archive_mode is \"on\", and enabled with\n> > * \"always\". On a primary, archiving is enabled if archive_mode is \"on\"\n> > * or \"always\".\n> > */\n> > if (!((XLogArchivingActive() && !inRecovery) ||\n> > (XLogArchivingAlways() && inRecovery)))\n> > return true;\n> > \n> > Please find in attachment a patch that fix this issue using the following\n> > test instead:\n> > \n> > if (!((XLogArchivingActive() && !StandbyModeRequested) ||\n> > (XLogArchivingAlways() && inRecovery)))\n> > return true;\n> > \n> > I'm not sure if we should rely on StandbyModeRequested for the second part\n> > of the test as well thought. What was the point to rely on\n> > RecoveryInProgress() to get the recovery status from shared mem? \n> \n> Since StandbyModeRequested is the startup process-local variable,\n> it basically cannot be used in XLogArchiveCheckDone() that other process\n> may call.\n\nOk, you answered my wondering about using recovery status from shared mem. This\nwas obvious. 
Thanks for your help!\n\nI was wondering if we could use \"ControlFile->state != DB_IN_CRASH_RECOVERY\".\nIt seems fine during crash recovery as the control file is updated before the\ncheckpoint, but it doesn't feel right for other code paths where the control\nfile might not be up-to-date on filesystem, right ?\n\n> So another approach would be necessary... One straight idea is to\n> add new shmem flag indicating whether we are in standby mode or not.\n\nI was thinking about setting XLogCtlData->SharedRecoveryInProgress as an enum\nusing:\n\n enum RecoveryState\n {\n NOT_IN_RECOVERY = 0\n IN_CRASH_RECOVERY,\n IN_ARCHIVE_RECOVERY\n }\n\nPlease, find in attachment a patch implementing this.\n\nPlus, I added a second commit to add one test in regard with this bug.\n\n> Another is to make the startup process remove .ready file if necessary.\n\nI'm not sure to understand this one.\n\nRegards,", "msg_date": "Wed, 1 Apr 2020 18:17:35 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "Hello.\n\nAt Wed, 1 Apr 2020 18:17:35 +0200, Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote in \n> Please, find in attachment a patch implementing this.\n\nThe patch partially reintroduces the issue the patch have\nfixed. 
Specifically a standby running a crash recovery wrongly marks a\nWAL file as \".ready\" if it is extant in pg_wal without accompanied by\n.ready file.\n\nPerhaps checking '.ready' before the checking for archive-mode would\nbe sufficient.\n\n> Plus, I added a second commit to add one test in regard with this bug.\n> \n> > Another is to make the startup process remove .ready file if necessary.\n> \n> I'm not sure to understand this one.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 02 Apr 2020 13:04:43 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "Sorry, it was quite ambiguous.\n\nAt Thu, 02 Apr 2020 13:04:43 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Wed, 1 Apr 2020 18:17:35 +0200, Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote in \n> > Please, find in attachment a patch implementing this.\n> \n> The patch partially reintroduces the issue the patch have\n> fixed. Specifically a standby running a crash recovery wrongly marks a\n> WAL file as \".ready\" if it is extant in pg_wal without accompanied by\n> .ready file.\n\nThe patch partially reintroduces the issue the commit 78ea8b5daa have\nfixed. 
Specifically a standby running a crash recovery wrongly marks a\nWAL file as \".ready\" if it is extant in pg_wal without accompanied by\n.ready file.\n\n> Perhaps checking '.ready' before the checking for archive-mode would\n> be sufficient.\n> \n> > Plus, I added a second commit to add one test in regard with this bug.\n> > \n> > > Another is to make the startup process remove .ready file if necessary.\n> > \n> > I'm not sure to understand this one.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 02 Apr 2020 13:07:34 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "\n\nOn 2020/04/02 13:07, Kyotaro Horiguchi wrote:\n> Sorry, it was quite ambiguous.\n> \n> At Thu, 02 Apr 2020 13:04:43 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n>> At Wed, 1 Apr 2020 18:17:35 +0200, Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote in\n>>> Please, find in attachment a patch implementing this.\n>>\n>> The patch partially reintroduces the issue the patch have\n>> fixed. Specifically a standby running a crash recovery wrongly marks a\n>> WAL file as \".ready\" if it is extant in pg_wal without accompanied by\n>> .ready file.\n> \n> The patch partially reintroduces the issue the commit 78ea8b5daa have\n> fixed. Specifically a standby running a crash recovery wrongly marks a\n> WAL file as \".ready\" if it is extant in pg_wal without accompanied by\n> .ready file.\n\nOn second thought, I think that we should discuss what the desirable\nbehavior is before the implentation. Especially what's unclear to me\nis whether to remove such WAL files in archive recovery case with\narchive_mode=on. 
Those WAL files would be required when recovering\nfrom the backup taken before that archive recovery happens.\nSo it seems unsafe to remove them in that case.\n\nTherefore, IMO that the patch should change the code so that\nno unarchived WAL files are removed not only in crash recovery\nbut also archive recovery. Thought?\n\nOf course, this change would lead to the issue that the past unarchived\nWAL files keep remaining in the case of warm-standby using archive\nrecovery. But this issue looks unavoidable. If users want to avoid that,\narchive_mode should be set to always.\n\nAlso I'm a bit wondering if it's really safe to remove such unarchived\nWAL files even in the standby case with archive_mode=on. I would need\nmore time to think that.\n\n>> Perhaps checking '.ready' before the checking for archive-mode would\n>> be sufficient.\n>>\n>>> Plus, I added a second commit to add one test in regard with this bug.\n>>>\n>>>> Another is to make the startup process remove .ready file if necessary.\n>>>\n>>> I'm not sure to understand this one.\n\nI was thinking to make the startup process remove such unarchived WAL files\nif archive_mode=on and StandbyModeRequested/ArchiveRecoveryRequested\nis true.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 2 Apr 2020 14:19:15 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "At Thu, 2 Apr 2020 14:19:15 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> On 2020/04/02 13:07, Kyotaro Horiguchi wrote:\n> > Sorry, it was quite ambiguous.\n> > At Thu, 02 Apr 2020 13:04:43 +0900 (JST), Kyotaro Horiguchi\n> > <horikyota.ntt@gmail.com> wrote in\n> >> At Wed, 1 Apr 2020 18:17:35 +0200, Jehan-Guillaume de Rorthais\n> >> <jgdr@dalibo.com> wrote in\n> >>> Please, find in attachment 
a patch implementing this.\n> >>\n> >> The patch partially reintroduces the issue the patch have\n> >> fixed. Specifically a standby running a crash recovery wrongly marks a\n> >> WAL file as \".ready\" if it is extant in pg_wal without accompanied by\n> >> .ready file.\n> > The patch partially reintroduces the issue the commit 78ea8b5daa have\n> > fixed. Specifically a standby running a crash recovery wrongly marks a\n> > WAL file as \".ready\" if it is extant in pg_wal without accompanied by\n> > .ready file.\n> \n> On second thought, I think that we should discuss what the desirable\n> behavior is before the implentation. Especially what's unclear to me\n\nAgreed.\n\n> is whether to remove such WAL files in archive recovery case with\n> archive_mode=on. Those WAL files would be required when recovering\n> from the backup taken before that archive recovery happens.\n> So it seems unsafe to remove them in that case.\n\nI'm not sure I'm getting the intention correctly, but I think it\nresponsibility of the operator to provide a complete set of archived\nWAL files for a backup. Could you elaborate what operation steps are\nyou assuming of?\n\n> Therefore, IMO that the patch should change the code so that\n> no unarchived WAL files are removed not only in crash recovery\n> but also archive recovery. Thought?\n\nAgreed if \"an unarchived WAL\" means \"a WAL file that is marked .ready\"\nand it should be archived immediately. My previous mail is written\nbased on the same thought.\n\nIn a very narrow window, if server crashed or killed after a segment\nis finished but before marking the file as .ready, the file doesn't\nhave .ready but should be archived. If we need to get rid of such a\nwindow, it would help to mark a WAL file as \".busy\" at creation time.\n\n> Of course, this change would lead to the issue that the past\n> unarchived\n> WAL files keep remaining in the case of warm-standby using archive\n> recovery. But this issue looks unavoidable. 
If users want to avoid\n> that,\n> archive_mode should be set to always.\n> \n> Also I'm a bit wondering if it's really safe to remove such unarchived\n> WAL files even in the standby case with archive_mode=on. I would need\n> more time to think that.\n> \n> >> Perhaps checking '.ready' before the checking for archive-mode would\n> >> be sufficient.\n> >>\n> >>> Plus, I added a second commit to add one test in regard with this bug.\n> >>>\n> >>>> Another is to make the startup process remove .ready file if\n> >>>> necessary.\n> >>>\n> >>> I'm not sure to understand this one.\n> \n> I was thinking to make the startup process remove such unarchived WAL\n> files\n> if archive_mode=on and StandbyModeRequested/ArchiveRecoveryRequested\n> is true.\n\nAs mentioned above, I don't understand the point of preserving WAL\nfiles that are either marked as .ready or not marked at all on a\nstandby with archive_mode=on.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 02 Apr 2020 16:23:23 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "\n\nOn 2020/04/02 16:23, Kyotaro Horiguchi wrote:\n> At Thu, 2 Apr 2020 14:19:15 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>> On 2020/04/02 13:07, Kyotaro Horiguchi wrote:\n>>> Sorry, it was quite ambiguous.\n>>> At Thu, 02 Apr 2020 13:04:43 +0900 (JST), Kyotaro Horiguchi\n>>> <horikyota.ntt@gmail.com> wrote in\n>>>> At Wed, 1 Apr 2020 18:17:35 +0200, Jehan-Guillaume de Rorthais\n>>>> <jgdr@dalibo.com> wrote in\n>>>>> Please, find in attachment a patch implementing this.\n>>>>\n>>>> The patch partially reintroduces the issue the patch have\n>>>> fixed. 
Specifically a standby running a crash recovery wrongly marks a\n>>>> WAL file as \".ready\" if it is extant in pg_wal without accompanied by\n>>>> .ready file.\n>>> The patch partially reintroduces the issue the commit 78ea8b5daa have\n>>> fixed. Specifically a standby running a crash recovery wrongly marks a\n>>> WAL file as \".ready\" if it is extant in pg_wal without accompanied by\n>>> .ready file.\n>>\n>> On second thought, I think that we should discuss what the desirable\n>> behavior is before the implementation. Especially what's unclear to me\n> \n> Agreed.\n> \n>> is whether to remove such WAL files in archive recovery case with\n>> archive_mode=on. Those WAL files would be required when recovering\n>> from the backup taken before that archive recovery happens.\n>> So it seems unsafe to remove them in that case.\n> \n> I'm not sure I'm getting the intention correctly, but I think it is the\n> responsibility of the operator to provide a complete set of archived\n> WAL files for a backup. Could you elaborate on what operation steps\n> you are assuming?\n\nPlease imagine the case where you need to do archive recovery\nfrom the database snapshot taken while there are many WAL files\nwith .ready files. Those WAL files have not been archived yet.\nIn this case, ISTM those WAL files should not be removed until\nthey are archived, when archive_mode = on.\n\n>> Therefore, IMO that the patch should change the code so that\n>> no unarchived WAL files are removed not only in crash recovery\n>> but also archive recovery. Thought?\n> \n> Agreed if \"an unarchived WAL\" means \"a WAL file that is marked .ready\"\n> and it should be archived immediately. My previous mail was written\n> based on the same thought.\n\nOk, so our *current* consensus seems to be the following.
Right?\n\n- If archive_mode=off, any WAL files with .ready files are removed in\n crash recovery, archive recovery and standby mode.\n\n- If archive_mode=on, WAL files with .ready files are removed only in\n standby mode. In crash recovery and archive recovery cases, they keep\n remaining and would be archived after recovery finishes (i.e., during\n normal processing).\n\n- If archive_mode=always, in crash recovery, archive recovery and\n standby mode, WAL files with .ready files are archived if WAL archiver\n is running.\n\nThat is, WAL files with .ready files are removed when either\narchive_mode!=always in standby mode or archive_mode=off.\n\n> In a very narrow window, if the server crashes or is killed after a segment\n> is finished but before marking the file as .ready, the file doesn't\n> have .ready but should be archived. If we need to get rid of such a\n> window, it would help to mark a WAL file as \".busy\" at creation time.\n> \n>> Of course, this change would lead to the issue that the past\n>> unarchived\n>> WAL files keep remaining in the case of warm-standby using archive\n>> recovery. But this issue looks unavoidable. If users want to avoid\n>> that,\n>> archive_mode should be set to always.\n>>\n>> Also I'm a bit wondering if it's really safe to remove such unarchived\n>> WAL files even in the standby case with archive_mode=on.
I would need\n>> more time to think that.\n>>\n>>>> Perhaps checking '.ready' before the checking for archive-mode would\n>>>> be sufficient.\n>>>>\n>>>>> Plus, I added a second commit to add one test in regard with this bug.\n>>>>>\n>>>>>> Another is to make the startup process remove .ready file if\n>>>>>> necessary.\n>>>>>\n>>>>> I'm not sure to understand this one.\n>>\n>> I was thinking to make the startup process remove such unarchived WAL\n>> files\n>> if archive_mode=on and StandbyModeRequested/ArchiveRecoveryRequested\n>> is true.\n> \n> As mentioned above, I don't understand the point of preserving WAL\n> files that are either marked as .ready or not marked at all on a\n> standby with archive_mode=on.\n\nMaybe yes. But I'm not confident about that there is no such case.\nAnyway I'm fine to fix the bug based on the above consensus at first.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 2 Apr 2020 19:38:59 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "On Thu, 02 Apr 2020 13:07:34 +0900 (JST)\nKyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n\n> Sorry, it was quite ambiguous.\n> \n> At Thu, 02 Apr 2020 13:04:43 +0900 (JST), Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote in \n> > At Wed, 1 Apr 2020 18:17:35 +0200, Jehan-Guillaume de Rorthais\n> > <jgdr@dalibo.com> wrote in \n> > > Please, find in attachment a patch implementing this. \n> > \n> > The patch partially reintroduces the issue the patch have\n> > fixed. Specifically a standby running a crash recovery wrongly marks a\n> > WAL file as \".ready\" if it is extant in pg_wal without accompanied by\n> > .ready file. \n> \n> The patch partially reintroduces the issue the commit 78ea8b5daa have\n> fixed. 
Specifically a standby running a crash recovery wrongly marks a\n> WAL file as \".ready\" if it is extant in pg_wal without accompanied by\n> .ready file.\n\nAs far as I understand StartupXLOG(), NOT_IN_RECOVERY and IN_CRASH_RECOVERY are\nonly set for production clusters, not standby ones. So the following test\nshould never catch a standby cluster, as:\n\n  (XLogArchivingActive() && inRecoveryState != IN_ARCHIVE_RECOVERY)\n\nForgive me if I'm wrong, but am I missing something?\n\nRegards,\n\n\n", "msg_date": "Thu, 2 Apr 2020 15:02:34 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "On Thu, 2 Apr 2020 19:38:59 +0900\nFujii Masao <masao.fujii@oss.nttdata.com> wrote:\n\n> On 2020/04/02 16:23, Kyotaro Horiguchi wrote:\n> > At Thu, 2 Apr 2020 14:19:15 +0900, Fujii Masao\n> > <masao.fujii@oss.nttdata.com> wrote in \n[...]\n> >> is whether to remove such WAL files in archive recovery case with\n> >> archive_mode=on. Those WAL files would be required when recovering\n> >> from the backup taken before that archive recovery happens.\n> >> So it seems unsafe to remove them in that case. \n> > \n> > I'm not sure I'm getting the intention correctly, but I think it is the\n> > responsibility of the operator to provide a complete set of archived\n> > WAL files for a backup. Could you elaborate on what operation steps\n> > you are assuming? \n> \n> Please imagine the case where you need to do archive recovery\n> from the database snapshot taken while there are many WAL files\n> with .ready files. Those WAL files have not been archived yet.\n> In this case, ISTM those WAL files should not be removed until\n> they are archived, when archive_mode = on.\n\nIf you rely on snapshot without pg_start/stop_backup, I agree.
Theses WAL\nshould be archived if:\n\n* archive_mode >= on for primary\n* archive_mode = always for standby\n\n> >> Therefore, IMO that the patch should change the code so that\n> >> no unarchived WAL files are removed not only in crash recovery\n> >> but also archive recovery. Thought? \n> > \n> > Agreed if \"an unarchived WAL\" means \"a WAL file that is marked .ready\"\n> > and it should be archived immediately. My previous mail is written\n> > based on the same thought. \n> \n> Ok, so our *current* consensus seems the followings. Right?\n> \n> - If archive_mode=off, any WAL files with .ready files are removed in\n> crash recovery, archive recoery and standby mode.\n\nyes\n\n> - If archive_mode=on, WAL files with .ready files are removed only in\n> standby mode. In crash recovery and archive recovery cases, they keep\n> remaining and would be archived after recovery finishes (i.e., during\n> normal processing).\n\nyes\n\n> - If archive_mode=always, in crash recovery, archive recovery and\n> standby mode, WAL files with .ready files are archived if WAL archiver\n> is running.\n\nyes\n\n> That is, WAL files with .ready files are removed when either\n> archive_mode!=always in standby mode or archive_mode=off.\n\nsounds fine to me.\n\n[...]\n> >>>>>> Another is to make the startup process remove .ready file if\n> >>>>>> necessary. \n> >>>>>\n> >>>>> I'm not sure to understand this one. \n> >>\n> >> I was thinking to make the startup process remove such unarchived WAL\n> >> files\n> >> if archive_mode=on and StandbyModeRequested/ArchiveRecoveryRequested\n> >> is true.\n\nOk, understood.\n\n> > As mentioned above, I don't understand the point of preserving WAL\n> > files that are either marked as .ready or not marked at all on a\n> > standby with archive_mode=on. \n> \n> Maybe yes. 
But I'm not confident about that there is no such case.\n\nWell, it seems to me that this is what you suggested few paragraph away:\n\n «.ready files are removed when either archive_mode!=always in standby mode»\n\n\n\n", "msg_date": "Thu, 2 Apr 2020 15:49:15 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "\n\nOn 2020/04/02 22:49, Jehan-Guillaume de Rorthais wrote:\n> On Thu, 2 Apr 2020 19:38:59 +0900\n> Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> \n>> On 2020/04/02 16:23, Kyotaro Horiguchi wrote:\n>>> At Thu, 2 Apr 2020 14:19:15 +0900, Fujii Masao\n>>> <masao.fujii@oss.nttdata.com> wrote in\n> [...]\n>>>> is whether to remove such WAL files in archive recovery case with\n>>>> archive_mode=on. Those WAL files would be required when recovering\n>>>> from the backup taken before that archive recovery happens.\n>>>> So it seems unsafe to remove them in that case.\n>>>\n>>> I'm not sure I'm getting the intention correctly, but I think it\n>>> responsibility of the operator to provide a complete set of archived\n>>> WAL files for a backup. Could you elaborate what operation steps are\n>>> you assuming of?\n>>\n>> Please imagine the case where you need to do archive recovery\n>> from the database snapshot taken while there are many WAL files\n>> with .ready files. Those WAL files have not been archived yet.\n>> In this case, ISTM those WAL files should not be removed until\n>> they are archived, when archive_mode = on.\n> \n> If you rely on snapshot without pg_start/stop_backup, I agree. Theses WAL\n> should be archived if:\n> \n> * archive_mode >= on for primary\n> * archive_mode = always for standby\n> \n>>>> Therefore, IMO that the patch should change the code so that\n>>>> no unarchived WAL files are removed not only in crash recovery\n>>>> but also archive recovery. 
Thought?\n>>>\n>>> Agreed if \"an unarchived WAL\" means \"a WAL file that is marked .ready\"\n>>> and it should be archived immediately. My previous mail is written\n>>> based on the same thought.\n>>\n>> Ok, so our *current* consensus seems the followings. Right?\n>>\n>> - If archive_mode=off, any WAL files with .ready files are removed in\n>> crash recovery, archive recoery and standby mode.\n> \n> yes\n> \n>> - If archive_mode=on, WAL files with .ready files are removed only in\n>> standby mode. In crash recovery and archive recovery cases, they keep\n>> remaining and would be archived after recovery finishes (i.e., during\n>> normal processing).\n> \n> yes\n> \n>> - If archive_mode=always, in crash recovery, archive recovery and\n>> standby mode, WAL files with .ready files are archived if WAL archiver\n>> is running.\n> \n> yes\n> \n>> That is, WAL files with .ready files are removed when either\n>> archive_mode!=always in standby mode or archive_mode=off.\n> \n> sounds fine to me.\n> \n> [...]\n>>>>>>>> Another is to make the startup process remove .ready file if\n>>>>>>>> necessary.\n>>>>>>>\n>>>>>>> I'm not sure to understand this one.\n>>>>\n>>>> I was thinking to make the startup process remove such unarchived WAL\n>>>> files\n>>>> if archive_mode=on and StandbyModeRequested/ArchiveRecoveryRequested\n>>>> is true.\n> \n> Ok, understood.\n> \n>>> As mentioned above, I don't understand the point of preserving WAL\n>>> files that are either marked as .ready or not marked at all on a\n>>> standby with archive_mode=on.\n>>\n>> Maybe yes. But I'm not confident about that there is no such case.\n> \n> Well, it seems to me that this is what you suggested few paragraph away:\n> \n> «.ready files are removed when either archive_mode!=always in standby mode»\n\nYes, so I'm fine with that as the first consensus because the behavior\nis obviously better than the current one. 
*If* the case where no WAL files\nshould be removed is found, I'd just like to propose the additional patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 2 Apr 2020 23:55:46 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "\n\nOn 2020/04/02 22:02, Jehan-Guillaume de Rorthais wrote:\n> On Thu, 02 Apr 2020 13:07:34 +0900 (JST)\n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> \n>> Sorry, it was quite ambiguous.\n>>\n>> At Thu, 02 Apr 2020 13:04:43 +0900 (JST), Kyotaro Horiguchi\n>> <horikyota.ntt@gmail.com> wrote in\n>>> At Wed, 1 Apr 2020 18:17:35 +0200, Jehan-Guillaume de Rorthais\n>>> <jgdr@dalibo.com> wrote in\n>>>> Please, find in attachment a patch implementing this.\n>>>\n>>> The patch partially reintroduces the issue the patch have\n>>> fixed. Specifically a standby running a crash recovery wrongly marks a\n>>> WAL file as \".ready\" if it is extant in pg_wal without accompanied by\n>>> .ready file.\n>>\n>> The patch partially reintroduces the issue the commit 78ea8b5daa have\n>> fixed. Specifically a standby running a crash recovery wrongly marks a\n>> WAL file as \".ready\" if it is extant in pg_wal without accompanied by\n>> .ready file.\n> \n> As far as I understand StartupXLOG(), NOT_IN_RECOVERY and IN_CRASH_RECOVERY are\n> only set for production clusters, not standby ones.\n\nDB_IN_CRASH_RECOVERY can be set even in standby mode. For example,\nif you start the standby from the cold backup of the primary,\nsince InArchiveRecovery is false at the beginning of the recovery,\nDB_IN_CRASH_RECOVERY is set in that moment. 
But then after all the valid\nWAL in pg_wal have been replayed, InArchiveRecovery is set to true and\nDB_IN_ARCHIVE_RECOVERY is set.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 2 Apr 2020 23:58:00 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "On Thu, 2 Apr 2020 23:58:00 +0900\nFujii Masao <masao.fujii@oss.nttdata.com> wrote:\n\n> On 2020/04/02 22:02, Jehan-Guillaume de Rorthais wrote:\n> > On Thu, 02 Apr 2020 13:07:34 +0900 (JST)\n> > Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> > \n> >> Sorry, it was quite ambiguous.\n> >>\n> >> At Thu, 02 Apr 2020 13:04:43 +0900 (JST), Kyotaro Horiguchi\n> >> <horikyota.ntt@gmail.com> wrote in \n> >>> At Wed, 1 Apr 2020 18:17:35 +0200, Jehan-Guillaume de Rorthais\n> >>> <jgdr@dalibo.com> wrote in \n> >>>> Please, find in attachment a patch implementing this. \n> >>>\n> >>> The patch partially reintroduces the issue the patch have\n> >>> fixed. Specifically a standby running a crash recovery wrongly marks a\n> >>> WAL file as \".ready\" if it is extant in pg_wal without accompanied by\n> >>> .ready file. \n> >>\n> >> The patch partially reintroduces the issue the commit 78ea8b5daa have\n> >> fixed. Specifically a standby running a crash recovery wrongly marks a\n> >> WAL file as \".ready\" if it is extant in pg_wal without accompanied by\n> >> .ready file. \n> > \n> > As far as I understand StartupXLOG(), NOT_IN_RECOVERY and IN_CRASH_RECOVERY\n> > are only set for production clusters, not standby ones. \n> \n> DB_IN_CRASH_RECOVERY can be set even in standby mode. For example,\n> if you start the standby from the cold backup of the primary,\n\nIn cold backup? 
Then ControlFile->state == DB_SHUTDOWNED, right?\n\nUnless I'm wrong, this should be caught by:\n\n if (ArchiveRecoveryRequested && ( [...] ||\n\t ControlFile->state == DB_SHUTDOWNED))\n {\n\tInArchiveRecovery = true;\n\tif (StandbyModeRequested)\n\t\tStandbyMode = true;\n }\n\nWith InArchiveRecovery=true, we later set DB_IN_ARCHIVE_RECOVERY instead of\nDB_IN_CRASH_RECOVERY.\n\n\n> since InArchiveRecovery is false at the beginning of the recovery,\n> DB_IN_CRASH_RECOVERY is set in that moment. But then after all the valid\n> WAL in pg_wal have been replayed, InArchiveRecovery is set to true and\n> DB_IN_ARCHIVE_RECOVERY is set.\n\nHowever, I suppose this is true if you restore a backup from a snapshot\nwithout backup_label, right?\n\n\n", "msg_date": "Thu, 2 Apr 2020 17:37:44 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "On Thu, 2 Apr 2020 23:55:46 +0900\nFujii Masao <masao.fujii@oss.nttdata.com> wrote:\n\n[...]\n> > Well, it seems to me that this is what you suggested a few paragraphs away:\n> > \n> > «.ready files are removed when either archive_mode!=always in standby\n> > mode» \n> \n> Yes, so I'm fine with that as the first consensus because the behavior\n> is obviously better than the current one.
*If* the case where no WAL files\n> should be removed is found, I'd just like to propose the additional patch.\n\nDo you mean to want to produce the next patch yourself?\n\n\n", "msg_date": "Thu, 2 Apr 2020 17:44:50 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "At Thu, 2 Apr 2020 17:44:50 +0200, Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote in \n> On Thu, 2 Apr 2020 23:55:46 +0900\n> Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> \n> [...]\n> > > Well, it seems to me that this is what you suggested few paragraph away:\n> > > \n> > > «.ready files are removed when either archive_mode!=always in standby\n> > > mode» \n> > \n> > Yes, so I'm fine with that as the first consensus because the behavior\n> > is obviously better than the current one. *If* the case where no WAL files\n> > should be removed is found, I'd just like to propose the additional patch.\n> \n> Do you mean to want to produce the next patch yourself?\n\nNo. 
Fujii-san is saying that he will address it if the fix made in\nthis thread is found to be imperfect later.\n\nHe suspects that WAL files should be preserved at least in certain\ncases even if the file persists forever, but the consensus here is to\nremove files under certain conditions so that no WAL file\npersists forever in the pg_wal directory.\n\nFeel free to propose the next version!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 03 Apr 2020 10:14:17 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "\n\nOn 2020/04/03 0:37, Jehan-Guillaume de Rorthais wrote:\n> On Thu, 2 Apr 2020 23:58:00 +0900\n> Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> \n>> On 2020/04/02 22:02, Jehan-Guillaume de Rorthais wrote:\n>>> On Thu, 02 Apr 2020 13:07:34 +0900 (JST)\n>>> Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n>>> \n>>>> Sorry, it was quite ambiguous.\n>>>>\n>>>> At Thu, 02 Apr 2020 13:04:43 +0900 (JST), Kyotaro Horiguchi\n>>>> <horikyota.ntt@gmail.com> wrote in\n>>>>> At Wed, 1 Apr 2020 18:17:35 +0200, Jehan-Guillaume de Rorthais\n>>>>> <jgdr@dalibo.com> wrote in\n>>>>>> Please, find in attachment a patch implementing this.\n>>>>>\n>>>>> The patch partially reintroduces the issue the patch have\n>>>>> fixed. Specifically a standby running a crash recovery wrongly marks a\n>>>>> WAL file as \".ready\" if it is extant in pg_wal without accompanied by\n>>>>> .ready file.\n>>>> The patch partially reintroduces the issue the commit 78ea8b5daa have\n>>>> fixed.
Specifically a standby running a crash recovery wrongly marks a\n>>>> WAL file as \".ready\" if it is extant in pg_wal without accompanied by\n>>>> .ready file.\n>>>\n>>> As far as I understand StartupXLOG(), NOT_IN_RECOVERY and IN_CRASH_RECOVERY\n>>> are only set for production clusters, not standby ones.\n>>\n>> DB_IN_CRASH_RECOVERY can be set even in standby mode. For example,\n>> if you start the standby from the cold backup of the primary,\n> \n> In cold backup? Then ControlFile->state == DB_SHUTDOWNED, right?\n> \n> Unless I'm wrong, this should be catched by:\n> \n> if (ArchiveRecoveryRequested && ( [...] ||\n> \t ControlFile->state == DB_SHUTDOWNED))\n> {\n> \tInArchiveRecovery = true;\n> \tif (StandbyModeRequested)\n> \t\tStandbyMode = true;\n> }\n> \n> With InArchiveRecovery=true, we later set DB_IN_ARCHIVE_RECOVERY instead of\n> DB_IN_CRASH_RECOVERY.\n\nYes, you're right. So I had to mention one more condition in my\nprevious email. The condition is that the cold backup was taken from\nthe server that was shutdowned with immdiate mode. In this case,\nthe code block that you pointed is skipped and InArchiveRecovery is\nnot set to true there.\n\n>> since InArchiveRecovery is false at the beginning of the recovery,\n>> DB_IN_CRASH_RECOVERY is set in that moment. 
But then after all the valid\n>> WAL in pg_wal have been replayed, InArchiveRecovery is set to true and\n>> DB_IN_ARCHIVE_RECOVERY is set.\n> \n> However, I suppose this is true if you restore a backup from a snapshot\n> without backup_label, right?\n\nMaybe yes.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 3 Apr 2020 15:44:40 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "\n\nOn 2020/04/03 10:14, Kyotaro Horiguchi wrote:\n> At Thu, 2 Apr 2020 17:44:50 +0200, Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote in\n>> On Thu, 2 Apr 2020 23:55:46 +0900\n>> Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>> [...]\n>>>> Well, it seems to me that this is what you suggested few paragraph away:\n>>>>\n>>>> «.ready files are removed when either archive_mode!=always in standby\n>>>> mode»\n>>>\n>>> Yes, so I'm fine with that as the first consensus because the behavior\n>>> is obviously better than the current one. *If* the case where no WAL files\n>>> should be removed is found, I'd just like to propose the additional patch.\n>>\n>> Do you mean to want to produce the next patch yourself?\n> \n> No. Fujii-san is saying that he will address it, if the fix made in\n> this thread is found to be imperfect later.\n> \n> He suspects that WAL files should be preserved at least in certain\n> cases even if the file persists forever, but the consensus here is to\n> remove files under certain conditions so as not to no WAL file\n> persists in pg_wal directory.\n\nYes.\n\n> Feel free to propose the next version!\n\nYes!\n\nBTW, now I'm thinking that the flag in shmem should be updated when\nthe startup process sets StandbyModeRequested to true at the beginning\nof the recovery. 
That is,\n\n- Add something like SharedStandbyModeRequested into XLogCtl. This field\n should be initialized with false;\n- Set XLogCtl->SharedStandbyModeRequested to true when the startup\n process detects the standby.signal file and sets the local variable\n StandbyModeRequested to true.\n- Make XLogArchiveCheckDone() use XLogCtl->SharedStandbyModeRequested\n to know whether the server is in standby mode or not.\n\nThought?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 3 Apr 2020 15:45:31 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "On Thu, 2 Apr 2020 23:55:46 +0900\nFujii Masao <masao.fujii@oss.nttdata.com> wrote:\n\n> On 2020/04/02 22:49, Jehan-Guillaume de Rorthais wrote:\n>> On Thu, 2 Apr 2020 19:38:59 +0900\n>> Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n[...]\n>>> That is, WAL files with .ready files are removed when either\n>>> archive_mode!=always in standby mode or archive_mode=off. \n>> \n>> sounds fine to me.\n\nTo some extends, it appears to me this sentence was relatively close to my\nprevious patch, as far as you exclude IN_CRASH_RECOVERY from the shortcut:\n\nXLogArchiveCheckDone(const char *xlog)\n{\n [...]\n if ( (inRecoveryState != IN_CRASH_RECOVERY) && (\n (inRecoveryState == NOT_IN_RECOVERY && !XLogArchivingActive()) ||\n (inRecoveryState == IN_ARCHIVE_RECOVERY && !XLogArchivingAlways())))\n return true;\n\nWhich means that only .done cleanup would occurs during CRASH_RECOVERY\nand .ready files might be created if no .done exists. No matter the futur status\nof the cluster: primary or standby. 
The normal shortcut will apply during the first\ncheckpoint after the crash recovery step.\n\nThis should handle the case where a backup without backup_label (taken from a\nsnapshot or after a shutdown with immediate) is restored to build a standby.\n\nPlease, find in attachment a third version of my patch\n\"0001-v3-Fix-WAL-retention-during-production-crash-recovery.patch\".\n\n\"0002-v1-Add-test-on-non-archived-WAL-during-crash-recovery.patch\" is left\nuntouched. But I'm considering adding some more tests relative to this\ndiscussion.\n\n> BTW, now I'm thinking that the flag in shmem should be updated when\n> the startup process sets StandbyModeRequested to true at the beginning\n> of the recovery. That is,\n> \n> - Add something like SharedStandbyModeRequested into XLogCtl. This field\n> should be initialized with false;\n> - Set XLogCtl->SharedStandbyModeRequested to true when the startup\n> process detects the standby.signal file and sets the local variable\n> StandbyModeRequested to true.\n> - Make XLogArchiveCheckDone() use XLogCtl->SharedStandbyModeRequested\n> to know whether the server is in standby mode or not.\n> \n> Thought?\n\nI tried to avoid a new flag in memory with the proposal in attachment of this\nemail. It seems to me various combinations of booleans with subtle differences\naround the same subject make it a bit trappy and complicated to understand.\n\nIf my proposal is rejected, I'll be happy to volunteer to add\nXLogCtl->SharedStandbyModeRequested though.
It seems like a simple enough fix\nas well.\n\nRegards,", "msg_date": "Fri, 3 Apr 2020 18:26:25 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "\n\nOn 2020/04/04 1:26, Jehan-Guillaume de Rorthais wrote:\n> On Thu, 2 Apr 2020 23:55:46 +0900\n> Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> \n>> On 2020/04/02 22:49, Jehan-Guillaume de Rorthais wrote:\n>>> On Thu, 2 Apr 2020 19:38:59 +0900\n>>> Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> [...]\n>>>> That is, WAL files with .ready files are removed when either\n>>>> archive_mode!=always in standby mode or archive_mode=off.\n>>>\n>>> sounds fine to me.\n> \n> To some extends, it appears to me this sentence was relatively close to my\n> previous patch, as far as you exclude IN_CRASH_RECOVERY from the shortcut:\n> \n> XLogArchiveCheckDone(const char *xlog)\n> {\n> [...]\n> if ( (inRecoveryState != IN_CRASH_RECOVERY) && (\n> (inRecoveryState == NOT_IN_RECOVERY && !XLogArchivingActive()) ||\n> (inRecoveryState == IN_ARCHIVE_RECOVERY && !XLogArchivingAlways())))\n> return true;\n> \n> Which means that only .done cleanup would occurs during CRASH_RECOVERY\n> and .ready files might be created if no .done exists. No matter the futur status\n> of the cluster: primary or standby. Normal shortcut will apply during first\n> checkpoint after the crash recovery step.\n> \n> This should handle the case where a backup without backup_label (taken from a\n> snapshot or after a shutdown with immediate) is restored to build a standby.\n> \n> Please, find in attachment a third version of my patch\n> \"0001-v3-Fix-WAL-retention-during-production-crash-recovery.patch\".\n\nThanks for updating the patch! 
Here are my review comments:\n\n-\tbool\t\tSharedRecoveryInProgress;\n+\tRecoveryState\tSharedRecoveryInProgress;\n\nSince the patch changes the meaning of this variable, the name of\nthe variable should be changed? Otherwise, the current name seems\nconfusing.\n\n+\t\t\tSpinLockAcquire(&XLogCtl->info_lck);\n+\t\t\tXLogCtl->SharedRecoveryInProgress = IN_CRASH_RECOVERY;\n+\t\t\tSpinLockRelease(&XLogCtl->info_lck);\n\nAs I explained upthread, this code can be reached and IN_CRASH_RECOVERY\ncan be set even in standby or archive recovery. Is this right behavior that\nyou're expecting?\n\nEven in crash recovery case, GetRecoveryState() returns IN_ARCHIVE_RECOVERY\nuntil this code is reached. Also when WAL replay is not necessary\n(e.g., restart of the server shutdowed cleanly before), GetRecoveryState()\nreturns IN_ARCHIVE_RECOVERY because this code is not reached. Aren't\nthese fragile? If XLogArchiveCheckDone() is only user of GetRecoveryState(),\nthey would be ok. But if another user will appear in the future, it seems\nvery easy to mistake. At least those behaviors should be commented in\nGetRecoveryState().\n\n-\tif (!((XLogArchivingActive() && !inRecovery) ||\n-\t\t (XLogArchivingAlways() && inRecovery)))\n+\tif ( (inRecoveryState != IN_CRASH_RECOVERY) && (\n+\t\t (inRecoveryState == NOT_IN_RECOVERY && !XLogArchivingActive()) &&\n+\t\t (inRecoveryState == IN_ARCHIVE_RECOVERY && !XLogArchivingAlways())))\n\t\treturn true;\n\nThe last condition seems to cause XLogArchiveCheckDone() to return\ntrue in archive recovery mode with archive_mode=on, then cause\nunarchived WAL files with .ready to be removed. Is my understanding right?\nIf yes, that behavior doesn't seem to match with our consensus, i.e.,\nWAL files with .ready should not be removed in that case.\n\n+/* Recovery state */\n+typedef enum RecoveryState\n+{\n+\tNOT_IN_RECOVERY = 0,\n+\tIN_CRASH_RECOVERY,\n+\tIN_ARCHIVE_RECOVERY\n+} RecoveryState;\n\nIsn't it better to add more comments here? 
For example, what does\n\"Recovery state\" mean? Which state is used in standby mode? Why? etc.\n\nIs it really ok not to have the value indicating standby mode?\n\nThese enum value names are confusing because the variables with\nsimilar names already exist. For example, IN_CRASH_RECOVERY vs.\nDB_IN_CRASH_RECOVERY. So IMO it seems better to rename them,\ne.g., by adding the prefix.\n\n> \n> \"0002-v1-Add-test-on-non-archived-WAL-during-crash-recovery.patch\" is left\n> untouched. But I'm considering adding some more tests relative to this\n> discussion.\n> \n>> BTW, now I'm thinking that the flag in shmem should be updated when\n>> the startup process sets StandbyModeRequested to true at the beginning\n>> of the recovery. That is,\n>>\n>> - Add something like SharedStandbyModeRequested into XLogCtl. This field\n>> should be initialized with false;\n>> - Set XLogCtl->SharedStandbyModeRequested to true when the startup\n>> process detects the standby.signal file and sets the local variable\n>> StandbyModeRequested to true.\n>> - Make XLogArchiveCheckDone() use XLogCtl->SharedStandbyModeRequested\n>> to know whether the server is in standby mode or not.\n>>\n>> Thought?\n> \n> I try to avoid a new flag in memory with the proposal in attachment of this\n> email. 
It seems to me various combinations of booleans with subtle differences\n> around the same subject make it a bit trappy and complicated to understand.\n\nOk, so firstly I try to review your patch!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Sat, 4 Apr 2020 02:49:50 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "On Sat, 4 Apr 2020 02:49:50 +0900\nFujii Masao <masao.fujii@oss.nttdata.com> wrote:\n\n> On 2020/04/04 1:26, Jehan-Guillaume de Rorthais wrote:\n> > On Thu, 2 Apr 2020 23:55:46 +0900\n> > Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> > \n> >> On 2020/04/02 22:49, Jehan-Guillaume de Rorthais wrote: \n> >>> On Thu, 2 Apr 2020 19:38:59 +0900\n> >>> Fujii Masao <masao.fujii@oss.nttdata.com> wrote: \n> > [...] \n> >>>> That is, WAL files with .ready files are removed when either\n> >>>> archive_mode!=always in standby mode or archive_mode=off. \n> >>>\n> >>> sounds fine to me. \n> > \n> > To some extent, it appears to me this sentence was relatively close to my\n> > previous patch, as far as you exclude IN_CRASH_RECOVERY from the shortcut:\n> > \n> > XLogArchiveCheckDone(const char *xlog)\n> > {\n> > [...]\n> > if ( (inRecoveryState != IN_CRASH_RECOVERY) && (\n> > (inRecoveryState == NOT_IN_RECOVERY && !XLogArchivingActive()) ||\n> > (inRecoveryState == IN_ARCHIVE_RECOVERY\n> > && !XLogArchivingAlways()))) return true;\n> > \n> > Which means that only .done cleanup would occur during CRASH_RECOVERY\n> > and .ready files might be created if no .done exists. No matter the future\n> > status of the cluster: primary or standby. 
Normal shortcut will apply\n> > during first checkpoint after the crash recovery step.\n> > \n> > This should handle the case where a backup without backup_label (taken from\n> > a snapshot or after a shutdown with immediate) is restored to build a\n> > standby.\n> > \n> > Please, find in attachment a third version of my patch\n> > \"0001-v3-Fix-WAL-retention-during-production-crash-recovery.patch\". \n> \n> Thanks for updating the patch! Here are my review comments:\n> \n> -\tbool\t\tSharedRecoveryInProgress;\n> +\tRecoveryState\tSharedRecoveryInProgress;\n> \n> Since the patch changes the meaning of this variable, the name of\n> the variable should be changed? Otherwise, the current name seems\n> confusing.\n\nIndeed, fixed using SharedRecoveryState\n\n> +\t\t\tSpinLockAcquire(&XLogCtl->info_lck);\n> +\t\t\tXLogCtl->SharedRecoveryInProgress =\n> IN_CRASH_RECOVERY;\n> +\t\t\tSpinLockRelease(&XLogCtl->info_lck);\n> \n> As I explained upthread, this code can be reached and IN_CRASH_RECOVERY\n> can be set even in standby or archive recovery. Is this right behavior that\n> you're expecting?\n\nYes. This patch avoids archive cleanup during crash recovery altogether,\nwhatever the requested status for the cluster.\n\n> Even in crash recovery case, GetRecoveryState() returns IN_ARCHIVE_RECOVERY\n> until this code is reached.\n\nI tried to stick as close as possible with \"ControlFile->state\" and old\nXLogCtl->SharedRecoveryInProgress variables. 
That's why it's initialized as\nIN_ARCHIVE_RECOVERY as XLogCtl->SharedRecoveryInProgress was initialized to true as\nwell.\n\nThe status itself is set during StartupXLOG when the historical code actually\ntries to define and record the real state between DB_IN_ARCHIVE_RECOVERY and\nDB_IN_CRASH_RECOVERY.\n\n> Also when WAL replay is not necessary (e.g., restart of the server shut down\n> cleanly before), GetRecoveryState() returns IN_ARCHIVE_RECOVERY because this\n> code is not reached.\n\nIt is set to NOT_IN_RECOVERY at the end of StartupXLOG, in the same place we\nset ControlFile->state = DB_IN_PRODUCTION. So GetRecoveryState() returns\nNOT_IN_RECOVERY as soon as StartupXLOG is done when no WAL replay is necessary.\n\n> Aren't these fragile? If XLogArchiveCheckDone() is the only user of\n> GetRecoveryState(), they would be ok. But if another user will appear in the\n> future, it seems very easy to make a mistake. At least those behaviors should be\n> commented in GetRecoveryState().\n\nWe certainly can set SharedRecoveryState earlier, in XLOGShmemInit, based on\nthe ControlFile->state value. In my understanding, anything different than\nDB_SHUTDOWNED or DB_SHUTDOWNED_IN_RECOVERY can be considered as a crash\nrecovery. Based on this, XLOGShmemInit can initialize SharedRecoveryState to\nIN_ARCHIVE_RECOVERY or IN_CRASH_RECOVERY.\n\nWith either value, RecoveryInProgress() keeps returning the same result so any\ncurrent code relying on RecoveryInProgress() is compatible.\n\nI'm not sure who would need this information before the WAL machinery is up,\nbut is it safe enough in your opinion for future usage of GetRecoveryState()? Do\nyou think this information might be useful before the WAL machinery is set?\nCurrent \"user\" (eg. restoreTwoPhaseData()) only needs to know if we are in\nrecovery, whatever the reason.\n\nThe patch in attachment sets SharedRecoveryState to either IN_ARCHIVE_RECOVERY\nor IN_CRASH_RECOVERY from XLOGShmemInit based on the ControlFile state. 
It\nfeels strange though to set this so far away from ControlFile->state\n= DB_IN_CRASH_RECOVERY...\n\n> -\tif (!((XLogArchivingActive() && !inRecovery) ||\n> -\t\t (XLogArchivingAlways() && inRecovery)))\n> +\tif ( (inRecoveryState != IN_CRASH_RECOVERY) && (\n> +\t\t (inRecoveryState == NOT_IN_RECOVERY\n> && !XLogArchivingActive()) &&\n> +\t\t (inRecoveryState == IN_ARCHIVE_RECOVERY\n> && !XLogArchivingAlways()))) return true;\n> \n> The last condition seems to cause XLogArchiveCheckDone() to return\n> true in archive recovery mode with archive_mode=on, then cause\n> unarchived WAL files with .ready to be removed. Is my understanding right?\n> If yes, that behavior doesn't seem to match with our consensus, i.e.,\n> WAL files with .ready should not be removed in that case.\n\nWe wrote:\n\n >> That is, WAL files with .ready files are removed when either\n >> archive_mode!=always in standby mode or archive_mode=off. \n > \n > sounds fine to me.\n\nSo if in standby mode and archive_mode is not \"always\", the .ready files\nshould be removed.\n\nIn the current patch, I split the conditions for the sake of clarity.\n\n> +/* Recovery state */\n> +typedef enum RecoveryState\n> +{\n> +\tNOT_IN_RECOVERY = 0,\n> +\tIN_CRASH_RECOVERY,\n> +\tIN_ARCHIVE_RECOVERY\n> +} RecoveryState;\n> \n> Isn't it better to add more comments here? For example, what does\n> \"Recovery state\" mean? Which state is used in standby mode? Why? etc.\n\nExplanations added.\n\n> Is it really ok not to have the value indicating standby mode?\n\nUnless I'm wrong, this shared state only indicates why we are recovering WAL,\nit does not need to reflect the status of the instance in shared memory.\nStandbyMode is already available for the startup process. Would it be useful\nto share this mode outside of the startup process?\n \n> These enum value names are confusing because the variables with\n> similar names already exist. For example, IN_CRASH_RECOVERY vs.\n> DB_IN_CRASH_RECOVERY. 
So IMO it seems better to rename them,\n> e.g., by adding the prefix.\n\nWell, I lack imagination. So I picked CRASH_RECOVERING and\nARCHIVE_RECOVERING.\n\n> > \"0002-v1-Add-test-on-non-archived-WAL-during-crash-recovery.patch\" is left\n> > untouched. But I'm considering adding some more tests relative to this\n> > discussion.\n> > \n> >> BTW, now I'm thinking that the flag in shmem should be updated when\n> >> the startup process sets StandbyModeRequested to true at the beginning\n> >> of the recovery. That is,\n> >>\n> >> - Add something like SharedStandbyModeRequested into XLogCtl. This field\n> >> should be initialized with false;\n> >> - Set XLogCtl->SharedStandbyModeRequested to true when the startup\n> >> process detects the standby.signal file and sets the local variable\n> >> StandbyModeRequested to true.\n> >> - Make XLogArchiveCheckDone() use XLogCtl->SharedStandbyModeRequested\n> >> to know whether the server is in standby mode or not.\n> >>\n> >> Thought? \n> > \n> > I try to avoid a new flag in memory with the proposal in attachment of this\n> > email. It seems to me various combinations of booleans with subtle\n> > differences around the same subject make it a bit trappy and complicated\n> > to understand. 
\n> \n> Ok, so firstly I try to review your patch!\n\nThank you for your review and help!\n\nIn attachment:\n* fix proposal: 0001-v4-Fix-WAL-retention-during-production-crash-recovery.patch\n* add test: 0002-v2-Add-test-on-non-archived-WAL-during-crash-recovery.patch\n 0002-v2 fix conflict with current master.\n\nRegards,", "msg_date": "Tue, 7 Apr 2020 17:17:36 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "On Tue, Apr 07, 2020 at 05:17:36PM +0200, Jehan-Guillaume de Rorthais wrote:\n> I'm not sure who would need this information before the WAL machinery is up,\n> but is it safe enough in your opinion for futur usage of GetRecoveryState()? Do\n> you think this information might be useful before the WAL machinery is set?\n> Current \"user\" (eg. restoreTwoPhaseData()) only need to know if we are in\n> recovery, whatever the reason.\n\n(I had this thread marked as something to look at, but could not.\nSorry for the delay).\n\n> src/test/recovery/t/011_crash_recovery.pl | 16 ++++++++++++++--\n> 1 file changed, 14 insertions(+), 2 deletions(-)\n> \n> diff --git a/src/test/recovery/t/011_crash_recovery.pl b/src/test/recovery/t/011_crash_recovery.pl\n> index ca6e92b50d..ce2e899891 100644\n> --- a/src/test/recovery/t/011_crash_recovery.pl\n> +++ b/src/test/recovery/t/011_crash_recovery.pl\n> @@ -15,11 +15,17 @@ if ($Config{osname} eq 'MSWin32')\n\nMay I ask why this new test is added to 011_crash_recovery.pl which is\naimed at testing crash and redo, while we have 002_archiving.pl that\nis dedicated to archiving in a more general manner?\n--\nMichael", "msg_date": "Wed, 8 Apr 2020 11:23:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "At Tue, 7 Apr 2020 17:17:36 +0200, 
Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote in \n> > > +/* Recovery state */\n> > > +typedef enum RecoveryState\n> > > +{\n> > > +\tNOT_IN_RECOVERY = 0,\n> > > +\tIN_CRASH_RECOVERY,\n> > > +\tIN_ARCHIVE_RECOVERY\n> > > +} RecoveryState; \n\nI'm not sure the complexity is required here. Are we assuming that\narchive_mode can be changed before restarting?\n\nAt Thu, 2 Apr 2020 15:49:15 +0200, Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote in \n> > Ok, so our *current* consensus seems the followings. Right?\n> > \n> > - If archive_mode=off, any WAL files with .ready files are removed in\n> > crash recovery, archive recovery and standby mode. \n> \n> yes\n\nIf archive_mode = off no WAL files are marked as \".ready\".\n\n> > - If archive_mode=on, WAL files with .ready files are removed only in\n> > standby mode. In crash recovery and archive recovery cases, they keep\n> > remaining and would be archived after recovery finishes (i.e., during\n> > normal processing).\n> \n> yes\n>\n> > - If archive_mode=always, in crash recovery, archive recovery and\n> > standby mode, WAL files with .ready files are archived if WAL archiver\n> > is running.\n> \n> yes\n\nSo if we assume archive_mode won't be changed after a crash before\nrestarting, if archive_mode = on on a standby, WAL files are not marked\nas \".ready\". If it is \"always\", WAL files that are to be archived are\nmarked as \".ready\". Finally, the condition reduces to:\n\nIf archiver is running, archive \".ready\" files. Otherwise ignore\n\".ready\" and just remove WAL files after use.\n> \n> > That is, WAL files with .ready files are removed when either\n> > archive_mode!=always in standby mode or archive_mode=off.\n> \n> sounds fine to me.\n\nThat situation implies that archive_mode has been changed. 
Can we\nguarantee the completeness of the archive in the case?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 08 Apr 2020 17:39:09 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "On Wed, 8 Apr 2020 11:23:45 +0900\nMichael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Apr 07, 2020 at 05:17:36PM +0200, Jehan-Guillaume de Rorthais wrote:\n> > I'm not sure who would need this information before the WAL machinery is up,\n> > but is it safe enough in your opinion for futur usage of\n> > GetRecoveryState()? Do you think this information might be useful before\n> > the WAL machinery is set? Current \"user\" (eg. restoreTwoPhaseData()) only\n> > need to know if we are in recovery, whatever the reason. \n> \n> (I had this thread marked as something to look at, but could not.\n> Sorry for the delay).\n\n(no worries :))\n\n> > src/test/recovery/t/011_crash_recovery.pl | 16 ++++++++++++++--\n> > 1 file changed, 14 insertions(+), 2 deletions(-)\n> > \n> > diff --git a/src/test/recovery/t/011_crash_recovery.pl\n> > b/src/test/recovery/t/011_crash_recovery.pl index ca6e92b50d..ce2e899891\n> > 100644 --- a/src/test/recovery/t/011_crash_recovery.pl\n> > +++ b/src/test/recovery/t/011_crash_recovery.pl\n> > @@ -15,11 +15,17 @@ if ($Config{osname} eq 'MSWin32') \n> \n> May I ask why this new test is added to 011_crash_recovery.pl which is\n> aimed at testing crash and redo, while we have 002_archiving.pl that\n> is dedicated to archiving in a more general manner?\n\nI thought it was a better place because the test happen during crash recovery.\n\nIn the meantime, while working on other tests related to $SUBJECT and the\ncurrent consensus, I was wondering if a new file would be a better place anyway.\n\n\n", "msg_date": "Wed, 8 Apr 2020 13:58:30 +0200", "msg_from": "Jehan-Guillaume 
de Rorthais <jgdr@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "On Wed, 08 Apr 2020 17:39:09 +0900 (JST)\nKyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n\n> At Tue, 7 Apr 2020 17:17:36 +0200, Jehan-Guillaume de Rorthais\n> <jgdr@dalibo.com> wrote in \n> > > +/* Recovery state */\n> > > +typedef enum RecoveryState\n> > > +{\n> > > +\tNOT_IN_RECOVERY = 0,\n> > > +\tIN_CRASH_RECOVERY,\n> > > +\tIN_ARCHIVE_RECOVERY\n> > > +} RecoveryState; \n> \n> I'm not sure the complexity is required here. Are we asuume that\n> archive_mode can be changed before restarting?\n\nI assume it can yes. Eg., one can restore a PITR backup as a standby and change\nthe value of archive_mode to either off, on or always.\n\n> At Thu, 2 Apr 2020 15:49:15 +0200, Jehan-Guillaume de Rorthais\n> <jgdr@dalibo.com> wrote in \n> > > Ok, so our *current* consensus seems the followings. Right?\n> > > \n> > > - If archive_mode=off, any WAL files with .ready files are removed in\n> > > crash recovery, archive recoery and standby mode. \n> > \n> > yes \n> \n> If archive_mode = off no WAL files are marked as \".ready\".\n\nSure, on the primary side.\n\nWhat if you build a standby from a backup with archive_mode=on with\nsome .ready files in there? \n\n> > > - If archive_mode=on, WAL files with .ready files are removed only in\n> > > standby mode. In crash recovery and archive recovery cases, they keep\n> > > remaining and would be archived after recovery finishes (i.e., during\n> > > normal processing). \n> > \n> > yes\n> > \n> > > - If archive_mode=always, in crash recovery, archive recovery and\n> > > standby mode, WAL files with .ready files are archived if WAL archiver\n> > > is running. 
\n> > \n> > yes \n> \n> So if we assume archive_mode won't be changed after a crash before\n> restarting, if archive_mode = on on a standy, WAL files are not marked\n> as \".ready\".\n\n.ready files can be inherited from the old primary when building the standby,\ndepending on the method. See previous explanations from Fujii-san:\nhttps://www.postgresql.org/message-id/flat/ca964b3a-61a0-902e-c7b3-3abbc01a921f%40oss.nttdata.com#ddd6cbad6c5e576e2e1ae53868ca3eea\n\n> If it is \"always\", WAL files that are to be archived are\n> marked as \".ready\". Finally, the condition reduces to:\n> \n> If archiver is running, archive \".ready\" files. Otherwise ignore\n> \".ready\" and just remove WAL files after use.\n> > \n> > > That is, WAL files with .ready files are removed when either\n> > > archive_mode!=always in standby mode or archive_mode=off. \n> > \n> > sounds fine to me. \n> \n> That situation implies that archive_mode has been changed.\n\nWhy? archive_mode may have been \"always\" on the primary when eg. a snapshot has\nbeen created.\n\nRegards,\n\n\n", "msg_date": "Wed, 8 Apr 2020 15:26:03 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "Hello, Jehan.\n\nAt Wed, 8 Apr 2020 15:26:03 +0200, Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote in \n> On Wed, 08 Apr 2020 17:39:09 +0900 (JST)\n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> \n> > At Tue, 7 Apr 2020 17:17:36 +0200, Jehan-Guillaume de Rorthais\n> > <jgdr@dalibo.com> wrote in \n> > > > +/* Recovery state */\n> > > > +typedef enum RecoveryState\n> > > > +{\n> > > > +\tNOT_IN_RECOVERY = 0,\n> > > > +\tIN_CRASH_RECOVERY,\n> > > > +\tIN_ARCHIVE_RECOVERY\n> > > > +} RecoveryState; \n> > \n> > I'm not sure the complexity is required here. Are we asuume that\n> > archive_mode can be changed before restarting?\n> \n> I assume it can yes. 
Eg., one can restore a PITR backup as a standby and change\n> the value of archive_mode to either off, on or always.\n\nThanks. I was confused. The original issue was restarted master can\nmiss files in archive. To fix that, it's sufficient not ignoring\n.ready. It is more than that.\n\n> > At Thu, 2 Apr 2020 15:49:15 +0200, Jehan-Guillaume de Rorthais\n> > <jgdr@dalibo.com> wrote in \n> > > > Ok, so our *current* consensus seems the followings. Right?\n> > > > \n> > > > - If archive_mode=off, any WAL files with .ready files are removed in\n> > > > crash recovery, archive recoery and standby mode. \n> > > \n> > > yes \n> > \n> > If archive_mode = off no WAL files are marked as \".ready\".\n> \n> Sure, on the primary side.\n> \n> What if you build a standby from a backup with archive_mode=on with\n> some .ready files in there? \n\nWell. Backup doesn't have nothing in archive_status directory if it is\ntaken by pg_basebackup. If the backup is created other way, it can\nhave some (as Fujii-san mentioned). Master with archive_mode != off\nand standby with archive_mode=always should archive WAL files that are\nnot marked .done, but standby with archive_mode == on should not. The\ncommit intended that but the mistake here is it thinks that inRecovery\nrepresents whether it is running as a standby or not, but actually it\nis true on primary during crash recovery.\n\nOn the other hand, with the patch, standby with archive_mode=on\nwrongly archives WAL files during crash recovery.\n\nWhat we should check there is, as the commit was intended, not whether\nit is under crash or archive recovery, but whether it is running as\nprimary or standby.\n\n> > If it is \"always\", WAL files that are to be archived are\n> > marked as \".ready\". Finally, the condition reduces to:\n> > \n> > If archiver is running, archive \".ready\" files. 
Otherwise ignore\n> > \".ready\" and just remove WAL files after use.\n> > > \n> > > > That is, WAL files with .ready files are removed when either\n> > > > archive_mode!=always in standby mode or archive_mode=off. \n> > > \n> > > sounds fine to me. \n> > \n> > That situation implies that archive_mode has been changed.\n> \n> Why? archive_mode may have been \"always\" on the primary when eg. a snapshot has\n> been created.\n\n.ready files are created only when archive_mode != off.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 09 Apr 2020 11:26:57 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "On Thu, 09 Apr 2020 11:26:57 +0900 (JST)\nKyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n[...]\n> > > At Thu, 2 Apr 2020 15:49:15 +0200, Jehan-Guillaume de Rorthais\n> > > <jgdr@dalibo.com> wrote in \n> > > > > Ok, so our *current* consensus seems the followings. Right?\n> > > > > \n> > > > > - If archive_mode=off, any WAL files with .ready files are removed in\n> > > > > crash recovery, archive recoery and standby mode. \n> > > > \n> > > > yes \n> > > \n> > > If archive_mode = off no WAL files are marked as \".ready\". \n> > \n> > Sure, on the primary side.\n> > \n> > What if you build a standby from a backup with archive_mode=on with\n> > some .ready files in there? \n> \n> Well. Backup doesn't have nothing in archive_status directory if it is\n> taken by pg_basebackup. If the backup is created other way, it can\n> have some (as Fujii-san mentioned). Master with archive_mode != off\n> and standby with archive_mode=always should archive WAL files that are\n> not marked .done, but standby with archive_mode == on should not. 
The\n> commit intended that\n\nUnless I'm wrong, the commit avoids creating .ready files on standby when a WAL\nhas neither .done or .ready status file.\n\n> but the mistake here is it thinks that inRecovery represents whether it is\n> running as a standby or not, but actually it is true on primary during crash\n> recovery.\n\nIndeed.\n\n> On the other hand, with the patch, standby with archive_mode=on\n> wrongly archives WAL files during crash recovery.\n\n\"without the patch\" you mean? You are talking about 78ea8b5daab, right?\n\n> What we should check there is, as the commit was intended, not whether\n> it is under crash or archive recovery, but whether it is running as\n> primary or standby.\n\nYes.\n\n> > > If it is \"always\", WAL files that are to be archived are\n> > > marked as \".ready\". Finally, the condition reduces to:\n> > > \n> > > If archiver is running, archive \".ready\" files. Otherwise ignore\n> > > \".ready\" and just remove WAL files after use. \n> > > > \n> > > > > That is, WAL files with .ready files are removed when either\n> > > > > archive_mode!=always in standby mode or archive_mode=off. \n> > > > \n> > > > sounds fine to me. \n> > > \n> > > That situation implies that archive_mode has been changed. \n> > \n> > Why? archive_mode may have been \"always\" on the primary when eg. a snapshot\n> > has been created. \n> \n> .ready files are created only when archive_mode != off.\n\nYes, on a primary, they are created when archive_mode > off. On standby, when\narchive_mode=always. 
If a primary had archive_mode=always, a standby created\nfrom its backup will still have archive_mode=always, with no changes.\nMaybe I miss your point, sorry.\n\nRegards,\n\n\n", "msg_date": "Thu, 9 Apr 2020 11:35:12 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "On Wed, 8 Apr 2020 13:58:30 +0200\nJehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n[...]\n> > May I ask why this new test is added to 011_crash_recovery.pl which is\n> > aimed at testing crash and redo, while we have 002_archiving.pl that\n> > is dedicated to archiving in a more general manner? \n> \n> I thought it was a better place because the test happen during crash recovery.\n> \n> In the meantime, while working on other tests related to $SUBJECT and the\n> current consensus, I was wondering if a new file would be a better place\n> anyway.\n\nSo, 002_archiving.pl deals more with testing recovering on hot standby side\nthan archiving. Maybe it could be renamed?\n\nWhile discussing this, I created a new file to add some more tests about WAL\narchiving and how recovery deal with them. Please, find the patch in\nattachment. I'll be able to move them elsewhere later, depending on the\nconclusions of this discussion.\n\nRegards,", "msg_date": "Thu, 9 Apr 2020 18:46:22 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "By the way, I haven't noticed that Cc: didn't contain -hackers. 
Added\nit.\n\n\nAt Thu, 9 Apr 2020 11:35:12 +0200, Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote in \n> On Thu, 09 Apr 2020 11:26:57 +0900 (JST)\n> Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> [...]\n> > > > At Thu, 2 Apr 2020 15:49:15 +0200, Jehan-Guillaume de Rorthais\n> > > > <jgdr@dalibo.com> wrote in \n> > > > > > Ok, so our *current* consensus seems the followings. Right?\n> > > > > > \n> > > > > > - If archive_mode=off, any WAL files with .ready files are removed in\n> > > > > > crash recovery, archive recoery and standby mode. \n> > > > > \n> > > > > yes \n> > > > \n> > > > If archive_mode = off no WAL files are marked as \".ready\". \n> > > \n> > > Sure, on the primary side.\n> > > \n> > > What if you build a standby from a backup with archive_mode=on with\n> > > some .ready files in there? \n> > \n> > Well. Backup doesn't have nothing in archive_status directory if it is\n> > taken by pg_basebackup. If the backup is created other way, it can\n> > have some (as Fujii-san mentioned). Master with archive_mode != off\n> > and standby with archive_mode=always should archive WAL files that are\n> > not marked .done, but standby with archive_mode == on should not. The\n> > commit intended that\n> \n> Unless I'm wrong, the commit avoids creating .ready files on standby when a WAL\n> has neither .done or .ready status file.\n\nRight.\n\n> > but the mistake here is it thinks that inRecovery represents whether it is\n> > running as a standby or not, but actually it is true on primary during crash\n> > recovery.\n> \n> Indeed.\n> \n> > On the other hand, with the patch, standby with archive_mode=on\n> > wrongly archives WAL files during crash recovery.\n> \n> \"without the patch\" you mean? You are talking about 78ea8b5daab, right?\n\nNo. 
I meant the v4 patch in [1].\n\n[1] https://www.postgresql.org/message-id/20200407171736.61906608%40firost\n\nPrior to applying the patch (that is the commit 78ea..),\nXLogArchiveCheckDone() correctly returns true (= allow to remove it)\nfor the same condition.\n\nThe proposed patch does the following thing.\n\nif (!XLogArchivingActive() ||\n recoveryState == ARCHIVE_RECOVERING && !XLogArchivingAlways())\n\treturn true;\n\nIt doesn't return for the condition \"recoveryState=CRASH_RECOVERING\nand archive_mode = on\". Then the WAL file is mistakenly marked as\n\".ready\" if not marked yet.\n\nBy the way, the code seems not following the convention a bit\nhere. Let the inserting code be in the same style as the existing code\naround.\n\n+\tif ( ! XLogArchivingActive() )\n\nI think we don't put the spaces within the parentheses above.\n\n| ARCHIVE_RECOVERING/CRASH_RECOVERING/NOT_IN_RECOVERY\n\nThe first two and the last one are in different style. *I* prefer them\n(if we use it) be \"IN_ARCHIVE_RECOVERY/IN_CRASH_RECOVERY/NOT_IN_RECOVERY\".\n\n> > What we should check there is, as the commit was intended, not whether\n> > it is under crash or archive recovery, but whether it is running as\n> > primary or standby.\n> \n> Yes.\n> \n> > > > If it is \"always\", WAL files that are to be archived are\n> > > > marked as \".ready\". Finally, the condition reduces to:\n> > > > \n> > > > If archiver is running, archive \".ready\" files. Otherwise ignore\n> > > > \".ready\" and just remove WAL files after use. \n> > > > > \n> > > > > > That is, WAL files with .ready files are removed when either\n> > > > > > archive_mode!=always in standby mode or archive_mode=off. \n> > > > > \n> > > > > sounds fine to me. \n> > > > \n> > > > That situation implies that archive_mode has been changed. \n> > > \n> > > Why? archive_mode may have been \"always\" on the primary when eg. a snapshot\n> > > has been created. 
\n> > \n> > .ready files are created only when archive_mode != off.\n> \n> Yes, on a primary, they are created when archive_mode > off. On standby, when\n> archive_mode=always. If a primary had archive_mode=always, a standby created\n> from its backup will still have archive_mode=always, with no changes.\n> Maybe I miss your point, sorry.\n\nSorry, it was ambiguous.\n\n> That is, WAL files with .ready files are removed when either\n> archive_mode!=always in standby mode or archive_mode=off. \n\nIf we have .ready file when archive_mode = off, the cluster (or the\noriginal of the copy cluster) should have been running in archive = on\nor always. That is, archive_mode has been changed. But anyway that\ndiscussion would not be in much relevance.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 10 Apr 2020 11:00:31 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "At Thu, 9 Apr 2020 18:46:22 +0200, Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote in \n> On Wed, 8 Apr 2020 13:58:30 +0200\n> Jehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n> [...]\n> > > May I ask why this new test is added to 011_crash_recovery.pl which is\n> > > aimed at testing crash and redo, while we have 002_archiving.pl that\n> > > is dedicated to archiving in a more general manner? \n> > \n> > I thought it was a better place because the test happen during crash recovery.\n> > \n> > In the meantime, while working on other tests related to $SUBJECT and the\n> > current consensus, I was wondering if a new file would be a better place\n> > anyway.\n> \n> So, 002_archiving.pl deals more with testing recovering on hot standby side\n> than archiving. Maybe it could be renamed?\n\nI have the same feeling with Michael. The test that archives are\ncreated correctly seems to fit the file. 
It would be unintentional\nthat the file is not exercising archiving so much.\n\n> While discussing this, I created a new file to add some more tests about WAL\n> archiving and how recovery deal with them. Please, find the patch in\n> attachment. I'll be able to move them elsewhere later, depending on the\n> conclusions of this discussion.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 10 Apr 2020 11:14:54 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "On Fri, Apr 10, 2020 at 11:14:54AM +0900, Kyotaro Horiguchi wrote:\n> I have the same feeling with Michael. The test that archives are\n> created correctly seems to fit the file. It would be unintentional\n> that the file is not exercising archiving so much.\n\nI have been finally looking at this thread and the latest patch set,\nsorry for the late reply.\n\n XLogCtl->XLogCacheBlck = XLOGbuffers - 1;\n- XLogCtl->SharedRecoveryInProgress = true;\n XLogCtl->SharedHotStandbyActive = false;\n XLogCtl->SharedPromoteIsTriggered = false;\n XLogCtl->WalWriterSleeping = false;\n[...]\n+ switch (ControlFile->state)\n+ {\n+ case DB_SHUTDOWNED:\n+ case DB_SHUTDOWNED_IN_RECOVERY:\n+ XLogCtl->SharedRecoveryState = ARCHIVE_RECOVERING;\n+ break;\n+ default:\n+ XLogCtl->SharedRecoveryState = CRASH_RECOVERING;\n+ }\nIt seems to me that the initial value of SharedRecoveryState should be\nCRASH_RECOVERING all the time no? 
StartupXLOG() is a code path taken\neven if starting cleanly, and the flag would be reset correctly at the\nend to NOT_IN_RECOVERY.\n\n+typedef enum RecoveryState\n+{\n+ NOT_IN_RECOVERY = 0, /* currently in production */\n+ CRASH_RECOVERING, /* recovering from a crash */\n+ ARCHIVE_RECOVERING /* recovering archives as requested */\n+} RecoveryState;\nI also have some issues with the name of those variables, here is an\nidea for the three states:\n- RECOVERY_STATE_CRASH\n- RECOVERT_STATE_ARCHIVE\n- RECOVERY_STATE_NONE\nI would recommend to use the same suffix for those variables to ease\ngrepping.\n\n /*\n- * Local copy of SharedRecoveryInProgress variable. True actually means \"not\n- * known, need to check the shared state\".\n+ * This is false when SharedRecoveryState is not ARCHIVE_RECOVERING.\n+ * True actually means \"not known, need to check the shared state\".\n */\nA double negation sounds wrong to me. And actually this variable is\nfalse when the shared state is set to NOT_IN_RECOVERY, and true when\nthe state is either CRASH_RECOVERING or ARCHIVE_RECOVERING because it\nmeans that recovery is running, be it archive recovery or crash\nrecovery, so the comment is wrong.\n\n+ /* The file is always deletable if archive_mode is \"off\". */\n+ if ( ! 
XLogArchivingActive() )\n+ return true;\n[...]\n+ if ( recoveryState == ARCHIVE_RECOVERING && !XLogArchivingAlways() )\n return true;\nIncorrect indentation.\n\n UpdateControlFile();\n+\n+ SpinLockAcquire(&XLogCtl->info_lck);\n+ XLogCtl->SharedRecoveryState = ARCHIVE_RECOVERING;\n+ SpinLockRelease(&XLogCtl->info_lck);\n+\n LWLockRelease(ControlFileLock);\nHere, the shared flag is updated while holding ControlFileLock to\nensure a consistent state of things within shared memory, so it would\nbe nice to add a comment about that.\n\n+RecoveryState\n+GetRecoveryState(void)\n+{\n+ volatile XLogCtlData *xlogctl = XLogCtl;\n+\n+ return xlogctl->SharedRecoveryState;\n+}\nEr, you need to acquire info_lck to look at the state here, no?\n\n /*\n- * The file is always deletable if archive_mode is \"off\". On standbys\n- * archiving is disabled if archive_mode is \"on\", and enabled with\n- * \"always\". On a primary, archiving is enabled if archive_mode is \"on\"\n- * or \"always\".\n+ * On standbys, the file is deletable if archive_mode is not\n+ * \"always\".\n */\nIt would be good to mention that while in crash recovery, files are\nnot considered as deletable.\n\nI agree that having a separate .pl file for the tests of this thread\nis just cleaner. Here are more comments about these.\n\n+# temporary fail archive_command for futur tests\n+$node->safe_psql('postgres', q{\n+ ALTER SYSTEM SET archive_command TO 'false';\n+ SELECT pg_reload_conf();\n+});\nThat's likely not portable on Windows even if you skip the tests there,\nand I am not sure that it is a good idea to rely on it being in PATH.\nDepending on the system, the path of the command is also likely going\nto be different. As here the goal is to prevent the archiver from doing\nits work, why not rely on the configuration where archive_mode is\nset but archive_command is not? This would cause the archiver to be a\nno-op process, and .ready files will remain around. 
You could then\nreplace the lookup of pg_stat_archiver with poll_query_until() and a\nquery that makes use of pg_stat_file() to make sure that the .ready\nexists when needed.\n\n+ ok( -f \"$node_data/pg_wal/archive_status/000000010000000000000001.ready\",\n+ \"WAL still ready to archive in archive_status\");\nIt would be good to mention in the description the check applies to a\nprimary.\n\n+# test recovery without archive_mode=always does not keep .ready WALs\n+$standby1 = get_new_node('standby');\n+$standby1->init_from_backup($node, 'backup', has_restoring => 1);\n+$standby1_data = $standby1->data_dir;\n+$standby1->start;\n+$standby1->safe_psql('postgres', q{CHECKPOINT});\nFor readability, archive_mode = on should be added to the configuration\nfile? Okay, this is inherited from the primary, still that would\navoid any issues if this code is refactored in some way.\n\n\"WAL waiting to be archived in backup removed with archive_mode=on\n on standby\" ); \nThat should be \"WAL segment\" or \"WAL file\", but not WAL.\n\nRegarding the tests on a standby, it seems to me that the following\nis necessary:\n1) Test that with archive_mode = on, segments are not marked with\n.ready.\n2) Test that with archive_mode = always, segments are marked with\n.ready during archive recovery.\n3) Test that with archive_mode = always, segments are not removed\nduring crash recovery.\nI can see tests for 1) and 2), but not 3). Could you add a\nstop('immediate')+start() for $standby2 at the end of\n020_archive_status.pl and check that the .ready file is still there\nafter crash recovery? The end of the tests actually relies on the\nfact that archive_command is set to \"false\" when the cold backup is\ntaken, before resetting it. I think that it would be cleaner to\nenforce the configuration you want to test before starting each\nstandby. 
It becomes easier to understand the flow of the test for the\nreader.\n--\nMichael", "msg_date": "Mon, 13 Apr 2020 16:14:14 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "On Mon, 13 Apr 2020 16:14:14 +0900\nMichael Paquier <michael@paquier.xyz> wrote:\n[...]\n> XLogCtl->XLogCacheBlck = XLOGbuffers - 1;\n> - XLogCtl->SharedRecoveryInProgress = true;\n> XLogCtl->SharedHotStandbyActive = false;\n> XLogCtl->SharedPromoteIsTriggered = false;\n> XLogCtl->WalWriterSleeping = false;\n> [...]\n> + switch (ControlFile->state)\n> + {\n> + case DB_SHUTDOWNED:\n> + case DB_SHUTDOWNED_IN_RECOVERY:\n> + XLogCtl->SharedRecoveryState = ARCHIVE_RECOVERING;\n> + break;\n> + default:\n> + XLogCtl->SharedRecoveryState = CRASH_RECOVERING;\n> + }\n> It seems to me that the initial value of SharedRecoveryState should be\n> CRASH_RECOVERING all the time no? StartupXLOG() is a code path taken\n> even if starting cleanly, and the flag would be reset correctly at the\n> end to NOT_IN_RECOVERY.\n\nPrevious version of the patch was setting CRASH_RECOVERING. Fujii-san reported\n(the 4 Apr 2020 02:49:50 +0900) that it could be useful to expose a better value\nuntil relevant code is reached in StartupXLOG() so GetRecoveryState() returns\na safer value for futur use.\n\nAs I answered upthread, I'm not sure who would need this information before the\nWAL machinery is up though. 
Note that ARCHIVE_RECOVERING and CRASH_RECOVERING\nare compatible with the previous behavior.\n\nMaybe the solution would be to init with CRASH_RECOVERING and add some comment\nin GetRecoveryState() to warn the state is \"enforced\" after the XLOG machinery\nis started and is init'ed to RECOVERING in the meantime?\n\nI initialized it to CRASH_RECOVERING in the new v5 patch and added a comment\nto GetRecoveryState().\n\n> +typedef enum RecoveryState\n> +{\n> + NOT_IN_RECOVERY = 0, /* currently in production */\n> + CRASH_RECOVERING, /* recovering from a crash */\n> + ARCHIVE_RECOVERING /* recovering archives as requested */\n> +} RecoveryState;\n> I also have some issues with the name of those variables, here is an\n> idea for the three states:\n> - RECOVERY_STATE_CRASH\n> - RECOVERT_STATE_ARCHIVE\n> - RECOVERY_STATE_NONE\n> I would recommend to use the same suffix for those variables to ease\n> grepping.\n\nSounds really good to me. Thanks!\n\n> /*\n> - * Local copy of SharedRecoveryInProgress variable. True actually means \"not\n> - * known, need to check the shared state\".\n> + * This is false when SharedRecoveryState is not ARCHIVE_RECOVERING.\n> + * True actually means \"not known, need to check the shared state\".\n> */\n> A double negation sounds wrong to me. And actually this variable is\n> false when the shared state is set to NOT_IN_RECOVERY, and true when\n> the state is either CRASH_RECOVERING or ARCHIVE_RECOVERING because it\n> means that recovery is running, be it archive recovery or crash\n> recovery, so the comment is wrong.\n\nIndeed, sorry. Fixed.\n\n> + /* The file is always deletable if archive_mode is \"off\". */\n> + if ( ! XLogArchivingActive() )\n> + return true;\n> [...]\n> + if ( recoveryState == ARCHIVE_RECOVERING && !XLogArchivingAlways() )\n> return true;\n> Incorrect indentation.\n\nIs it the spaces as reported by Horiguchi-san? 
I removed them in the latest patch.\n\n> UpdateControlFile();\n> +\n> + SpinLockAcquire(&XLogCtl->info_lck);\n> + XLogCtl->SharedRecoveryState = ARCHIVE_RECOVERING;\n> + SpinLockRelease(&XLogCtl->info_lck);\n> +\n> LWLockRelease(ControlFileLock);\n> Here, the shared flag is updated while holding ControlFileLock to\n> ensure a consistent state of things within shared memory, so it would\n> be nice to add a comment about that.\n\nIndeed. The original code had no such comment and I asked myself the same\nquestion. Added.\n\n> +RecoveryState\n> +GetRecoveryState(void)\n> +{\n> + volatile XLogCtlData *xlogctl = XLogCtl;\n> +\n> + return xlogctl->SharedRecoveryState;\n> +}\n> Er, you need to acquire info_lck to look at the state here, no?\n\nYes, fixed.\n\n> /*\n> - * The file is always deletable if archive_mode is \"off\". On standbys\n> - * archiving is disabled if archive_mode is \"on\", and enabled with\n> - * \"always\". On a primary, archiving is enabled if archive_mode is \"on\"\n> - * or \"always\".\n> + * On standbys, the file is deletable if archive_mode is not\n> + * \"always\".\n> */\n> It would be good to mention that while in crash recovery, files are\n> not considered as deletable.\n\nWell, in fact, I am still wondering about this. I was hesitant to add a\nshortcut like:\n\n /* no WAL segment cleanup during crash recovery */\n if (recoveryState == RECOVERT_STATE_CRASH)\n return false;\n\nBut, what if for example we crashed for lack of disk space during intensive\nwrites? 
During crash recovery, any WAL marked as .done could be removed and\nallow the system to start again and maybe make even further WAL cleanup by\narchiving some more WAL without competing with previous high write ratio.\n\nWhen we recover a primary, this behavior seems conform with any value of\narchive_mode, even if it has been changed after crash and before starting it.\nOn a standby, we might create .ready files, but they will be removed during the\nfirst restartpoint if needed.\n\n> I agree that having a separate .pl file for the tests of this thread\n> is just cleaner. Here are more comments about these.\n> \n> +# temporary fail archive_command for futur tests\n> +$node->safe_psql('postgres', q{\n> + ALTER SYSTEM SET archive_command TO 'false';\n> + SELECT pg_reload_conf();\n> +});\n> That's likely portable on Windows even if you skip the tests there,\n> and I am not sure that it is a good idea to rely on it being in PATH.\n> Depending on the system, the path of the command is also likely going\n> to be different. As here the goal is to prevent the archiver to do\n> its work, why not relying on the configuration where archive_mode is\n> set but archive_command is not? This would cause the archiver to be a\n> no-op process, and .ready files will remain around. You could then\n> replace the lookup of pg_stat_archiver with poll_query_until() and a\n> query that makes use of pg_stat_file() to make sure that the .ready\n> exists when needed.\n\nI needed a failure so I can test pg_stat_archiver reports it as well. In my\nmind, using \"false\" would either trigger a failure because false returns 1 or...\na failure because the command is not found. 
Either way, the result is the\nsame.\n\nUsing poll_query_until+pg_stat_file is a good idea, but not enough as\narchiver reports a failure some moment after the .ready signal file appears.\n\n> + ok( -f \"$node_data/pg_wal/archive_status/000000010000000000000001.ready\",\n> + \"WAL still ready to archive in archive_status\");\n> It would be good to mention in the description the check applies to a\n> primary.\n\ndone\n\n> +# test recovery without archive_mode=always does not keep .ready WALs\n> +$standby1 = get_new_node('standby');\n> +$standby1->init_from_backup($node, 'backup', has_restoring => 1);\n> +$standby1_data = $standby1->data_dir;\n> +$standby1->start;\n> +$standby1->safe_psql('postgres', q{CHECKPOINT});\n> For readability archive_mode = on should be added to the configuration\n> file? Okay, this is inherited from the primary, still that would\n> avoid any issues if this code is refactored in some way.\n\nadded\n\n> \"WAL waiting to be archived in backup removed with archive_mode=on\n> on\n> standby\" ); That should be \"WAL segment\" or \"WAL file\", but not WAL.\n\nupdated everywhere.\n\n> Regarding the tests on a standby, it seems to me that the following\n> is necessary:\n> 1) Test that with archive_mode = on, segments are not marked with\n> .ready.\n> 2) Test that with archive_mode = always, segments are marked with\n> .ready during archive recovery.\n> 3) Test that with archive_mode = always, segments are not removed\n> during crash recovery.\n> I can see tests for 1) and 2),\n\nNot really. The current tests do not check that segments created *after* the\nbackup are marked or not with a .ready file on standby. I added these tests plus\nvarious other ones.\n\n> but not 3). 
Could you add a\n> stop('immediate')+start() for $standby2 at the end of\n> 020_archive_status.pl and check that the .ready file is still there\n> after crash recovery?\n\ndone.\n\n> The end of the tests actually relies on the\n> fact that archive_command is set to \"false\" when the cold backup is\n> taken, before resetting it. I think that it would be cleaner to\n> enforce the configuration you want to test before starting each\n> standby. It becomes easier to understand the flow of the test for the\n> reader. \n\ndone as well.\n\nThank you for your review!", "msg_date": "Tue, 14 Apr 2020 18:03:19 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "On Fri, 10 Apr 2020 11:00:31 +0900 (JST)\nKyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n[...]\n> > > but the mistake here is it thinks that inRecovery represents whether it is\n> > > running as a standby or not, but actually it is true on primary during\n> > > crash recovery. \n> > \n> > Indeed.\n> > \n> > > On the other hand, with the patch, standby with archive_mode=on\n> > > wrongly archives WAL files during crash recovery. \n> > \n> > \"without the patch\" you mean? You are talking about 78ea8b5daab, right? \n> \n> No. I meant the v4 patch in [1].\n> \n> [1] https://www.postgresql.org/message-id/20200407171736.61906608%40firost\n> \n> Prior to applying the patch (that is the commit 78ea..),\n> XLogArchiveCheckDone() correctly returns true (= allow to remove it)\n> for the same condition.\n> \n> The proposed patch does the following thing.\n> \n> if (!XLogArchivingActive() ||\n> recoveryState == ARCHIVE_RECOVERING && !XLogArchivingAlways())\n> \treturn true;\n> \n> It doesn't return for the condition \"recoveryState=CRASH_RECOVERING\n> and archive_mode = on\". 
Then the WAL files are mistakenly marked as\n> \".ready\" if not marked yet.\n\nIndeed.\n\nBut .ready files are then deleted during the first restartpoint. I'm not sure\nhow to fix this behavior without making the code too complex.\n\nThis is discussed in my last answer to Michael a few minutes ago as well.\n\n> By the way, the code seems not following the convention a bit\n> here. Let the inserting code be in the same style to the existing code\n> around.\n> \n> +\tif ( ! XLogArchivingActive() )\n> \n> I think we don't put the spaces within the parentheses above.\n\nIndeed. This is fixed in patch v5, sent a few minutes ago.\n\n> | ARCHIVE_RECOVERING/CRASH_RECOVERING/NOT_IN_RECOVERY\n> \n> The first two and the last one are in different style. *I* prefer them\n> (if we use it) be \"IN_ARCHIVE_RECOVERY/IN_CRASH_RECOVERY/NOT_IN_RECOVERY\".\n\nI like Michael's proposal. See v5 of the patch.\n\nThank you for your review!\n\nRegards,\n\n\n", "msg_date": "Tue, 14 Apr 2020 18:09:23 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "On Tue, Apr 14, 2020 at 06:03:19PM +0200, Jehan-Guillaume de Rorthais wrote:\n\nThanks for the new version.\n\n> On Mon, 13 Apr 2020 16:14:14 +0900 Michael Paquier <michael@paquier.xyz> wrote:\n>> It seems to me that the initial value of SharedRecoveryState should be\n>> CRASH_RECOVERING all the time no? StartupXLOG() is a code path taken\n>> even if starting cleanly, and the flag would be reset correctly at the\n>> end to NOT_IN_RECOVERY.\n> \n> Previous version of the patch was setting CRASH_RECOVERING. 
Fujii-san reported\n> (the 4 Apr 2020 02:49:50 +0900) that it could be useful to expose a better value\n> until relevant code is reached in StartupXLOG() so GetRecoveryState() returns\n> a safer value for futur use.\n>\n> As I answered upthread, I'm not sure who would need this information before the\n> WAL machinery is up though. Note that ARCHIVE_RECOVERING and CRASH_RECOVERING\n> are compatible with the previous behavior.\n\nI am not sure either if we need this information until the startup\nprocess is up, but even if we need it I'd rather keep the code\nconsistent with the past practice, which was that\nSharedRecoveryInProgress got set to true, the equivalent of crash\nrecovery as that's more generic than the archive recovery switch.\n\n> Maybe the solution would be to init with CRASH_RECOVERING and add some comment\n> in GetRecoveryState() to warn the state is \"enforced\" after the XLOG machinery\n> is started and is init'ed to RECOVERING in the meantime?\n> \n> I initialized it to CRASH_RECOVERING in the new v5 patch and added a comment\n> to GetRecoveryState().\n\nNot sure that the comment is worth it. Your description of the state\nlooks enough, and the code is clear that we have just an initial\nshared memory state in this case.\n\n>> + /* The file is always deletable if archive_mode is \"off\". */\n>> + if ( ! XLogArchivingActive() )\n>> + return true;\n>> [...]\n>> + if ( recoveryState == ARCHIVE_RECOVERING && !XLogArchivingAlways() )\n>> return true;\n>> Incorrect indentation.\n> \n> Is it the spaces as reported by Horiguchi-san? I removed them in latest patch.\n\nYes.\n\n>> It would be good to mention that while in crash recovery, files are\n>> not considered as deletable.\n> \n> Well, in fact, I am still wondering about this. 
I was hesitant to add a\n> shortcut like:\n> \n> /* no WAL segment cleanup during crash recovery */\n> if (recoveryState == RECOVERT_STATE_CRASH)\n> return false;\n> \n> But, what if for example we crashed for lack of disk space during intensive\n> writes? During crash recovery, any WAL marked as .done could be removed and\n> allow the system to start again and maybe make even further WAL cleanup by\n> archiving some more WAL without competing with previous high write ratio.\n>\n> When we recover a primary, this behavior seems conform with any value of\n> archive_mode, even if it has been changed after crash and before starting it.\n> On a standby, we might create .ready files, but they will be removed during the\n> first restartpoint if needed.\n\nI guess that you mean .ready and not .done here? Removing .done files\ndoes not matter as the segments related to them are already gone.\nEven with that, why should we need to make the backend smarter about\nthe removal of .ready files during crash recovery. It seems to me\nthat we should keep them, and an operator could always come by himself\nand do some manual cleanup to free some space in the pg_wal partition.\n\n> I needed a failure so I can test pg_stat_archiver reports it as well. In my\n> mind, using \"false\" would either trigger a failure because false returns 1 or...\n> a failure because the command is not found. In either way, the result is the\n> same.\n> \n> Using poll_query_until+pg_stat_file, is a good idea, but not enough as\n> archiver reports a failure some moment after the .ready signal file appears.\n\nUsing an empty string makes the test more portable, but while I looked\nat it I have found out that it leads to delays in the archiver except\nif you force the generation of more segments in the test, causing the\nlogic to get more complicated with the manipulation of the .ready and\n.done files. And I was then finding myself to add an\narchive_timeout.. 
Anyway, this reduced the readability of the test so\nI am pretty much giving up on this idea.\n\n>> The end of the tests actually relies on the\n>> fact that archive_command is set to \"false\" when the cold backup is\n>> taken, before resetting it. I think that it would be cleaner to\n>> enforce the configuration you want to test before starting each\n>> standby. It becomes easier to understand the flow of the test for the\n>> reader. \n> \n> done as well.\n\nI have put my hands on the code, and attached is a cleaned up\nversion for the backend part. Below are some notes.\n\n+ * RECOVERT_STATE_ARCHIVE is set for archive recovery or for a\nstandby.\nTypo here that actually comes from my previous email, and that you\nblindly copy-pasted, repeated five times in the tree actually.\n\n+ RecoveryState recoveryState = GetRecoveryState();\n+\n+ /* The file is always deletable if archive_mode is \"off\". */\n+ if (!XLogArchivingActive())\n+ return true;\nThere is no point in calling GetRecoveryState() if archive_mode = off.\n\n+ * There's two different reasons to recover WAL: when standby mode is\nrequested\n+ * or after a crash to catchup with consistency.\nNit: s/There's/There are/. Anyway, I don't see much point in keeping\nthis comment as the description of each state value should be enough,\nso I have removed it.\n\nI am currently in the middle of reviewing the test and there are a\ncouple of things that can be improved but I lack time today, so\nI'll continue tomorrow on it. 
There is no need to send two separate\npatches by the way as the code paths touched are different.\n--\nMichael", "msg_date": "Thu, 16 Apr 2020 17:11:00 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "On Thu, 16 Apr 2020 17:11:00 +0900\nMichael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Apr 14, 2020 at 06:03:19PM +0200, Jehan-Guillaume de Rorthais wrote:\n> \n> Thanks for the new version.\n\nThank you for v6.\n\n> > On Mon, 13 Apr 2020 16:14:14 +0900 Michael Paquier <michael@paquier.xyz>\n> > wrote: \n> >> It seems to me that the initial value of SharedRecoveryState should be\n> >> CRASH_RECOVERING all the time no? StartupXLOG() is a code path taken\n> >> even if starting cleanly, and the flag would be reset correctly at the\n> >> end to NOT_IN_RECOVERY. \n> > \n> > Previous version of the patch was setting CRASH_RECOVERING. Fujii-san\n> > reported (the 4 Apr 2020 02:49:50 +0900) that it could be useful to expose\n> > a better value until relevant code is reached in StartupXLOG() so\n> > GetRecoveryState() returns a safer value for futur use.\n> >\n> > As I answered upthread, I'm not sure who would need this information before\n> > the WAL machinery is up though. Note that ARCHIVE_RECOVERING and\n> > CRASH_RECOVERING are compatible with the previous behavior. 
\n> \n> I am not sure either if we need this information until the startup\n> process is up, but even if we need it I'd rather keep the code\n> consistent with the past practice, which was that\n> SharedRecoveryInProgress got set to true, the equivalent of crash\n> recovery as that's more generic than the archive recovery switch.\n\nOK.\n\n> > Maybe the solution would be to init with CRASH_RECOVERING and add some\n> > comment in GetRecoveryState() to warn the state is \"enforced\" after the\n> > XLOG machinery is started and is init'ed to RECOVERING in the meantime?\n> > \n> > I initialized it to CRASH_RECOVERING in the new v5 patch and added a comment\n> > to GetRecoveryState(). \n> \n> Not sure that the comment is worth it. Your description of the state\n> looks enough, and the code is clear that we have just an initial\n> shared memory state in this case.\n\nOK. Will remove later after your review of the tests.\n\n> >> It would be good to mention that while in crash recovery, files are\n> >> not considered as deletable. \n> > \n> > Well, in fact, I am still wondering about this. I was hesitant to add a\n> > shortcut like:\n> > \n> > /* no WAL segment cleanup during crash recovery */\n> > if (recoveryState == RECOVERT_STATE_CRASH)\n> > return false;\n> > \n> > But, what if for example we crashed for lack of disk space during intensive\n> > writes? During crash recovery, any WAL marked as .done could be removed and\n> > allow the system to start again and maybe make even further WAL cleanup by\n> > archiving some more WAL without competing with previous high write ratio.\n> >\n> > When we recover a primary, this behavior seems conform with any value of\n> > archive_mode, even if it has been changed after crash and before starting\n> > it. On a standby, we might create .ready files, but they will be removed\n> > during the first restartpoint if needed. 
\n> \n> I guess that you mean .ready and not .done here?\n\nNo, I meant .done.\n\n> Removing .done files does not matter as the segments related to them are\n> already gone.\n\nNot necessarily. There's a time windows between the moment the archiver set\nthe .done file and when the checkpointer removes the associated WAL file.\nSo, after a PANIC because of lack of space, WAL associated with .done files but\nnot removed yet will be removed during the crash recovery.\n\n> Even with that, why should we need to make the backend smarter about\n> the removal of .ready files during crash recovery. It seems to me\n> that we should keep them, and an operator could always come by himself\n> and do some manual cleanup to free some space in the pg_wal partition.\n\nWe are agree on this.\n\n> > I needed a failure so I can test pg_stat_archiver reports it as well. In my\n> > mind, using \"false\" would either trigger a failure because false returns 1\n> > or... a failure because the command is not found. In either way, the result\n> > is the same.\n> > \n> > Using poll_query_until+pg_stat_file, is a good idea, but not enough as\n> > archiver reports a failure some moment after the .ready signal file\n> > appears. \n> \n> Using an empty string makes the test more portable, but while I looked\n> at it I have found out that it leads to delays in the archiver except\n> if you force the generation of more segments in the test, causing the\n> logic to get more complicated with the manipulation of the .ready and\n> .done files. And I was then finding myself to add an\n> archive_timeout.. Anyway, this reduced the readability of the test so\n> I am pretty much giving up on this idea.\n\nUnless I'm wrong, the empty string does not raise an error in pg_stat_archiver,\nand I wanted to add a test on this as well.\n\n> >> The end of the tests actually relies on the\n> >> fact that archive_command is set to \"false\" when the cold backup is\n> >> taken, before resetting it. 
I think that it would be cleaner to\n> >> enforce the configuration you want to test before starting each\n> >> standby. It becomes easier to understand the flow of the test for the\n> >> reader. \n> > \n> > done as well. \n> \n> I have put my hands on the code, and attached is a cleaned up\n> version for the backend part. Below are some notes.\n> \n> + * RECOVERT_STATE_ARCHIVE is set for archive recovery or for a\n> standby.\n> Typo here that actually comes from my previous email, and that you\n> blindly copy-pasted, repeated five times in the tree actually.\n\nOops...yes, even in a comment with RECOVERT_STATE_NONE :/\nSorry.\n\n> + RecoveryState recoveryState = GetRecoveryState();\n> +\n> + /* The file is always deletable if archive_mode is \"off\". */\n> + if (!XLogArchivingActive())\n> + return true;\n> There is no point to call GetRecoveryState() is archive_mode = off.\n\ngood catch!\n\n> + * There's two different reasons to recover WAL: when standby mode is\n> requested\n> + * or after a crash to catchup with consistency.\n> Nit: s/There's/There are/. Anyway, I don't see much point in keeping\n> this comment as the description of each state value should be enough,\n> so I have removed it.\n\nOK\n\n> I am currently in the middle of reviewing the test and there are a\n> couple of things that can be improved but I lack of time today, so\n> I'll continue tomorrow on it. There is no need to send two separate\n> patches by the way as the code paths touched are different.\n\nOK.\n\nThanks for your review! 
Let me know if you want me to add/change/fix some tests.\n\nRegards,\n\n\n", "msg_date": "Fri, 17 Apr 2020 00:07:39 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "On Fri, Apr 17, 2020 at 12:07:39AM +0200, Jehan-Guillaume de Rorthais wrote:\n> On Thu, 16 Apr 2020 17:11:00 +0900 Michael Paquier <michael@paquier.xyz> wrote:\n>> Removing .done files does not matter as the segments related to them are\n>> already gone.\n> \n> Not necessarily. There's a time windows between the moment the archiver set\n> the .done file and when the checkpointer removes the associated WAL file.\n> So, after a PANIC because of lack of space, WAL associated with .done files but\n> not removed yet will be removed during the crash recovery.\n\nNot sure that it is something that matters for this thread though, so\nif necessary I think that it could be be discussed separately.\n\n>> Even with that, why should we need to make the backend smarter about\n>> the removal of .ready files during crash recovery. It seems to me\n>> that we should keep them, and an operator could always come by himself\n>> and do some manual cleanup to free some space in the pg_wal partition.\n> \n> We are agree on this.\n\nOkay.\n\n> Unless I'm wrong, the empty string does not raise an error in pg_stat_archiver,\n> and I wanted to add a test on this as well.\n\nExactly, it won't raise an error. Instead I switched to use a poll\nquery with pg_stat_file() and .ready files, but this has proved to\ndelay the test considerably if we did not create more segments. And\nyour approach has the merit to be more simple with only two segments\nmanipulated for the whole test. So I have tried first my idea,\nnoticed the mess it introduced, and just kept your approach.\n\n> Thanks for your review! 
Let me know if you want me to add/change/fix some tests.\n\nThanks, I have worked more on the test, refactoring pieces related to\nthe segment names, adjusting some comments and fixing some of the\nlogic. Note that you introduced something incorrect at the creation\nof $standby2 as you have been updating postgresql.conf.auto for\n$standby1.\n\nI have noticed an extra issue while looking at the backend pieces\ntoday: at the beginning of the REDO loop we forgot one place where\nSharedRecoveryState *has* to be updated to a correct state (around\nthe comment \"Update pg_control to show that we are...\" in xlog.c) as\nthe startup process may decide to switch the control file state to\nDB_IN_ARCHIVE_RECOVERY or DB_IN_CRASH_RECOVERY, but we forgot to\nupdate the new shared flag at this early stage. It did not matter\nbefore because SharedRecoveryInProgress would be only \"true\" for both,\nbut that's not the case anymore as we need to make the difference\nbetween crash recovery and archive recovery in the new flag. 
There is\nno actual need to update SharedRecoveryState to RECOVERY_STATE_CRASH\nas the initial shared memory state is RECOVERY_STATE_CRASH, but\nupdating the flag makes the code more consistent IMO so I updated it\nanyway in the attached.\n\nI have the feeling that I need to work a bit more on this patch, but\nmy impression is that we are getting to something committable here.\n\nThoughts?\n--\nMichael", "msg_date": "Fri, 17 Apr 2020 15:50:43 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "On Fri, 17 Apr 2020 15:50:43 +0900\nMichael Paquier <michael@paquier.xyz> wrote:\n\n> On Fri, Apr 17, 2020 at 12:07:39AM +0200, Jehan-Guillaume de Rorthais wrote:\n> > On Thu, 16 Apr 2020 17:11:00 +0900 Michael Paquier <michael@paquier.xyz>\n> > wrote: \n> >> Removing .done files does not matter as the segments related to them are\n> >> already gone. \n> > \n> > Not necessarily. There's a time windows between the moment the archiver set\n> > the .done file and when the checkpointer removes the associated WAL file.\n> > So, after a PANIC because of lack of space, WAL associated with .done files\n> > but not removed yet will be removed during the crash recovery. \n> \n> Not sure that it is something that matters for this thread though, so\n> if necessary I think that it could be be discussed separately.\n\nOK. However, unless I'm wrong, what I am describing as an desired behavior\nis the current behavior of XLogArchiveCheckDone. So, we might want to decide if\nv8 should return false during crash recovery no matter the archive_mode setup,\nor if we keep the curent behavior. I vote for keeping it this way.\n\n[...]\n> > Unless I'm wrong, the empty string does not raise an error in\n> > pg_stat_archiver, and I wanted to add a test on this as well. \n> \n> Exactly, it won't raise an error. 
Instead I switched to use a poll\n> query with pg_stat_file() and .ready files, but this has proved to\n> delay the test considerably if we did not create more segments. And\n> your approach has the merit to be more simple with only two segments\n> manipulated for the whole test. So I have tried first my idea,\n> noticed the mess it introduced, and just kept your approach.\n\nMaybe we could use something more common for all plateform? Eg.:\n\n archive_command='this command does not exist'\n\nAt least, we would have the same error everywhere, as far as it could matter...\n\n> > Thanks for your review! Let me know if you want me to add/change/fix some\n> > tests. \n> \n> Thanks, I have worked more on the test, refactoring pieces related to\n> the segment names, adjusting some comments and fixing some of the\n> logic. Note that you introduced something incorrect at the creation\n> of $standby2 as you have been updating postgresql.conf.auto for\n> $standby1.\n\nerf, last minute quick edit with lack of review on my side :(\n\n> I have noticed an extra issue while looking at the backend pieces\n> today: at the beginning of the REDO loop we forgot one place where\n> SharedRecoveryState *has* to be updated to a correct state (around\n> the comment \"Update pg_control to show that we are...\" in xlog.c) as\n> the startup process may decide to switch the control file state to\n> DB_IN_ARCHIVE_RECOVERY or DB_IN_CRASH_RECOVERY, but we forgot to\n> update the new shared flag at this early stage. It did not matter\n> before because SharedRecoveryInProgress would be only \"true\" for both,\n> but that's not the case anymore as we need to make the difference\n> between crash recovery and archive recovery in the new flag. 
There is\n> no actual need to update SharedRecoveryState to RECOVERY_STATE_CRASH\n> as the initial shared memory state is RECOVERY_STATE_CRASH, but\n> updating the flag makes the code more consistent IMO so I updated it\n> anyway in the attached.\n\nGrmbl...I had this logic the other way around: init with\nRECOVERY_STATE_RECOVERY and set to CRASH in this exact if/then/else block... I\nremoved it in v4 when setting XLogCtl->SharedRecoveryState to RECOVERY or CRASH\nbased on ControlFile->state.\n\nSorry, I forgot it after discussing the init value in v5 :(\n\nRegards,\n\n\n", "msg_date": "Fri, 17 Apr 2020 15:33:04 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "On Fri, Apr 17, 2020 at 03:33:04PM +0200, Jehan-Guillaume de Rorthais wrote:\n> On Fri, 17 Apr 2020 15:50:43 +0900 Michael Paquier <michael@paquier.xyz> wrote:\n>> Not sure that it is something that matters for this thread though, so\n>> if necessary I think that it could be discussed separately.\n> \n> OK. However, unless I'm wrong, what I am describing as a desired behavior\n> is the current behavior of XLogArchiveCheckDone. So, we might want to decide if\n> v8 should return false during crash recovery no matter the archive_mode setup,\n> or if we keep the current behavior. I vote for keeping it this way.\n\nI would rather avoid that now, as we don't check explicitely for crash\nrecovery in this code path. And for the purpose of this patch it is\nfine to stick with the extra check on a standby with\n(RECOVERY_STATE_ARCHIVE && archive_mode = always).\n\n> Maybe we could use something more common for all plateform? Eg.:\n> \n> archive_command='this command does not exist'\n> \n> At least, we would have the same error everywhere, as far as it could matter...\n\nYeah. We could try to do with \"false\" as command anyway, and see what\nthe buildfarm thinks. 
As the test is skipped on Windows, I would\nassume that it does not matter much anyway. Let's see what others\nthink about this piece. I don't have plans to touch again this patch\nuntil likely the middle of next week.\n\n>> Thanks, I have worked more on the test, refactoring pieces related to\n>> the segment names, adjusting some comments and fixing some of the\n>> logic. Note that you introduced something incorrect at the creation\n>> of $standby2 as you have been updating postgresql.conf.auto for\n>> $standby1.\n> \n> erf, last minute quick edit with lack of review on my side :(\n\nNo problem. It happens.\n\n>> I have noticed an extra issue while looking at the backend pieces\n>> today: at the beginning of the REDO loop we forgot one place where\n>> SharedRecoveryState *has* to be updated to a correct state (around\n>> the comment \"Update pg_control to show that we are...\" in xlog.c) as\n>> the startup process may decide to switch the control file state to\n>> DB_IN_ARCHIVE_RECOVERY or DB_IN_CRASH_RECOVERY, but we forgot to\n>> update the new shared flag at this early stage. It did not matter\n>> before because SharedRecoveryInProgress would be only \"true\" for both,\n>> but that's not the case anymore as we need to make the difference\n>> between crash recovery and archive recovery in the new flag. There is\n>> no actual need to update SharedRecoveryState to RECOVERY_STATE_CRASH\n>> as the initial shared memory state is RECOVERY_STATE_CRASH, but\n>> updating the flag makes the code more consistent IMCRASHO so I updated it\n>> anyway in the attached.\n> \n> Grmbl...I had this logic the other way around: init with\n> RECOVERY_STATE_RECOVERY and set to CRASH in this exact if/then/else block... I\n> removed it in v4 when setting XLogCtl->SharedRecoveryState to RECOVERY or CRASH\n> based on ControlFile->state.\n> \n> Sorry, I forgot it after discussing the init value in v5 :(\n\nIndeed. The extra initialization was part of v4, and got removed as\nof v5. 
Still, it seems to me that this part was not complete without\nupdating the shared memory field correctly at the beginning of the\nREDO processing as the last version of the patch does.\n--\nMichael", "msg_date": "Sat, 18 Apr 2020 18:26:11 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "At Sat, 18 Apr 2020 18:26:11 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Fri, Apr 17, 2020 at 03:33:04PM +0200, Jehan-Guillaume de Rorthais wrote:\n> > On Fri, 17 Apr 2020 15:50:43 +0900 Michael Paquier <michael@paquier.xyz> wrote:\n> >> Not sure that it is something that matters for this thread though, so\n> >> if necessary I think that it could be discussed separately.\n> > \n> > OK. However, unless I'm wrong, what I am describing as a desired behavior\n> > is the current behavior of XLogArchiveCheckDone. So, we might want to decide if\n> > v8 should return false during crash recovery no matter the archive_mode setup,\n> > or if we keep the current behavior. I vote for keeping it this way.\n> \n> I would rather avoid that now, as we don't check explicitely for crash\n> recovery in this code path. And for the purpose of this patch it is\n> fine to stick with the extra check on a standby with\n> (RECOVERY_STATE_ARCHIVE && archive_mode = always).\n\nThe commit 78ea8b5daa intends that WAL segments are properly removed\non standby with archive_mode=on by not marking .ready. The v7\nactually let such segments be marked .ready, but they are finally\nremoved after entering archive recovery. It preserves the patch's\nintention in that perspective. 
(I'd rather prefer to distinguish\n\"ArchiveRecoveryRequested\" somehow but it would be more complex and it\nis not the agreement on this thread.)\n\nAs the result, +1 to what v7 is doing and discussing on earlier\nremoval of such WAL segments separately if needed.\n\n\n> > Maybe we could use something more common for all plateform? Eg.:\n> > \n> > archive_command='this command does not exist'\n> > \n> > At least, we would have the same error everywhere, as far as it could matter...\n> \n> Yeah. We could try to do with \"false\" as command anyway, and see what\n> the buildfarm thinks. As the test is skipped on Windows, I would\n> assume that it does not matter much anyway. Let's see what others\n> think about this piece. I don't have plans to touch again this patch\n> until likely the middle of next week.\n\nCouldn't we use \"/\" as a globally-results-in-failure command? But\nthat doesn't increment failed_count. The reason is pgarch_archiveXLog\nexits with FATAL for \"is a directory\" error. The comment asserts that\nwe exit with FATAL for SIGINT or SIGQUIT and if so it is enough to\ncheck only exit-by-signal case. The following fix worked.\n\ndiff --git a/src/backend/postmaster/pgarch.c b/src/backend/postmaster/pgarch.c\nindex 01ffd6513c..def6a68063 100644\n--- a/src/backend/postmaster/pgarch.c\n+++ b/src/backend/postmaster/pgarch.c\n@@ -595,7 +595,7 @@ pgarch_archiveXlog(char *xlog)\n \t\t * \"command not found\" type of error. If we overreact it's no big\n \t\t * deal, the postmaster will just start the archiver again.\n \t\t */\n-\t\tint\t\t\tlev = wait_result_is_any_signal(rc, true) ? FATAL : LOG;\n+\t\tint\t\t\tlev = wait_result_is_any_signal(rc, false) ? FATAL : LOG;\n \n \t\tif (WIFEXITED(rc))\n \t\t{\n\nI didn't tested it on Windows (I somehow broke my repo and it's too\nslow to clone.) 
but system(\"/\") returned 1 and I think that result\nincrements the counter.\n\n> >> Thanks, I have worked more on the test, refactoring pieces related to\n> >> the segment names, adjusting some comments and fixing some of the\n> >> logic. Note that you introduced something incorrect at the creation\n> >> of $standby2 as you have been updating postgresql.conf.auto for\n> >> $standby1.\n> > \n> > erf, last minute quick edit with lack of review on my side :(\n> \n> No problem. It happens.\n> \n> >> I have noticed an extra issue while looking at the backend pieces\n> >> today: at the beginning of the REDO loop we forgot one place where\n> >> SharedRecoveryState *has* to be updated to a correct state (around\n> >> the comment \"Update pg_control to show that we are...\" in xlog.c) as\n> >> the startup process may decide to switch the control file state to\n> >> DB_IN_ARCHIVE_RECOVERY or DB_IN_CRASH_RECOVERY, but we forgot to\n> >> update the new shared flag at this early stage. It did not matter\n> >> before because SharedRecoveryInProgress would be only \"true\" for both,\n> >> but that's not the case anymore as we need to make the difference\n> >> between crash recovery and archive recovery in the new flag. There is\n> >> no actual need to update SharedRecoveryState to RECOVERY_STATE_CRASH\n> >> as the initial shared memory state is RECOVERY_STATE_CRASH, but\n> >> updating the flag makes the code more consistent IMCRASHO so I updated it\n> >> anyway in the attached.\n> > \n> > Grmbl...I had this logic the other way around: init with\n> > RECOVERY_STATE_RECOVERY and set to CRASH in this exact if/then/else block... I\n> > removed it in v4 when setting XLogCtl->SharedRecoveryState to RECOVERY or CRASH\n> > based on ControlFile->state.\n> > \n> > Sorry, I forgot it after discussing the init value in v5 :(\n> \n> Indeed. The extra initialization was part of v4, and got removed as\n> of v5. 
Still, it seems to me that this part was not complete without\n> updating the shared memory field correctly at the beginning of the\n> REDO processing as the last version of the patch does.\n\nI may not be following the discussion, but I think it is reasonable\nthat SharedRecoveryState is initialized as CRASH then moves to ARCHIVE\nas needed and finishes as NONE. That transition also keeps\nRecoveryInProgress() stable.\n\n\nOther minor comments:\n\n+\tRECOVERY_STATE_NONE\t\t\t/* currently in production */\n\nI think it would be better named RECOVERY_STATE_DONE.\n\nBy the way I noticed that RecoveryState is exactly a subset of\nDBState. And changes of SharedRecoveryState happen side-by-side with\nControlFileData->state in most places. Couldn't we just use\nControlFile->state instead of SharedRecoveryState?\n\n\nBy the way I found a typo.\n\n+# Recovery tests for the achiving with a standby partially check\ns/achiving/archiving/\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 20 Apr 2020 16:02:31 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "On Mon, Apr 20, 2020 at 04:02:31PM +0900, Kyotaro Horiguchi wrote:\n> At Sat, 18 Apr 2020 18:26:11 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> As the result, +1 to what v7 is doing and discussing on earlier\n> removal of such WAL segments separately if needed.\n\nThanks for the extra review.\n\n>> Yeah. We could try to do with \"false\" as command anyway, and see what\n>> the buildfarm thinks. As the test is skipped on Windows, I would\n>> assume that it does not matter much anyway. Let's see what others\n>> think about this piece. I don't have plans to touch again this patch\n>> until likely the middle of next week.\n> \n> Couldn't we use \"/\" as a globally-results-in-failure command? 
But\n> that doesn't increment failed_count. The reason is pgarch_archiveXLog\n> exits with FATAL for \"is a directory\" error. The comment asserts that\n> we exit with FATAL for SIGINT or SIGQUIT and if so it is enough to\n> check only exit-by-signal case. The following fix worked.\n\nYeah, I was working on this stuff today and I noticed this problem. I\nwas just going to send an email on the matter with a more portable\npatch and you also just beat me to it with this one :)\n\nSo yes, using \"false\" may be a bad idea because we cannot rely on the\ncase where the command does not exist in an environment in this test.\nAfter more testing, I have been hit hard about the fact that the\narchiver exits immediately if an archive command cannot be found\n(errcode = 127), and it does not report this failure back to\npg_stat_archiver, which would cause the test to wait until the timeout\nof poll_query_until() kills the test. There is however an extra\nmethod not mentioned yet on this thread: we know that cp/copy is\nportable enough per the buildfarm, so let's use a copy command that we\nknow *will* fail. A simple way of doing this is a command where the\norigin file does not exist.\n\n> --- a/src/backend/postmaster/pgarch.c\n> +++ b/src/backend/postmaster/pgarch.c\n> @@ -595,7 +595,7 @@ pgarch_archiveXlog(char *xlog)\n> \t\t * \"command not found\" type of error. If we overreact it's no big\n> \t\t * deal, the postmaster will just start the archiver again.\n> \t\t */\n> -\t\tint\t\t\tlev = wait_result_is_any_signal(rc, true) ? FATAL : LOG;\n> +\t\tint\t\t\tlev = wait_result_is_any_signal(rc, false) ? FATAL : LOG;\n> \n> \t\tif (WIFEXITED(rc))\n> \t\t{\n> \n> I didn't tested it on Windows (I somehow broke my repo and it's too\n> slow to clone.) but system(\"/\") returned 1 and I think that result\n> increments the counter.\n\nNo, this would be a behavior change, which is not acceptable in my\nview. 
(By the way, just nuke your full repo if it does not work\nanymore on Windows, this method works).\n\n>> Indeed. The extra initialization was part of v4, and got removed as\n>> of v5. Still, it seems to me that this part was not complete without\n>> updating the shared memory field correctly at the beginning of the\n>> REDO processing as the last version of the patch does.\n> \n> I may not be following the discussion, but I think it is reasonable\n> that SharedRecoveryState is initialized as CRASH then moves to ARCHIVE\n> as needed and finished by NONE. That transition also stables\n> RecoveryInProgress().\n\nThought as well about that over the weekend, and that's still the best\noption to me.\n\n> I think it would be better be RECOVERY_STATE_DONE.\n\nI like this suggestion better than the original in v7.\n\n> By the way I noticed that RecoveryState is exactly a subset of\n> DBState. And changes of SharedRecoveryState happens side-by-side with\n> ControlFileData->state in most places. Coundn't we just usee\n> ControlFile->state instead of SharedRecoveryState?\n\nI actually found confusing to use the same thing, because then the\nreader would thing that SharedRecoveryState could be set to more\nvalues but we don't want that.\n\n> By the way I found a typo.\n> \n> +# Recovery tests for the achiving with a standby partially check\n> s/achiving/archiving/\n\nThanks, fixed.\n\nAttached is an updated patch, where I tweaked more comments.\n\nJehan-Guillaume, who is your colleague who found originally about this\nproblem? We should credit him in the commit message.\n--\nMichael", "msg_date": "Mon, 20 Apr 2020 16:34:44 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "On Mon, 20 Apr 2020 16:34:44 +0900\nMichael Paquier <michael@paquier.xyz> wrote:\n[...]\n> > By the way I noticed that RecoveryState is exactly a subset of\n> > DBState. 
And changes of SharedRecoveryState happens side-by-side with\n> > ControlFileData->state in most places. Coundn't we just usee\n> > ControlFile->state instead of SharedRecoveryState?\n> \n> I actually found confusing to use the same thing, because then the\n> reader would thing that SharedRecoveryState could be set to more\n> values but we don't want that.\n\nI thought about this while studying various possible fix. \n\nhttps://www.postgresql.org/message-id/flat/20200401181735.11100908%40firost#6192afba4e4549b8d9bac03168bad46b\n\nThe problem is that we would have to read the controldata file each time we\nwonder if a segment should be archived/removed. Moreover, the controldata\nfile might not be in sync quickly enough with the real state for some other\ncode path or futur needs.\n\n[...]\n> Attached is an updated patch, where I tweaked more comments.\n> \n> Jehan-Guillaume, who is your colleague who found originally about this\n> problem? We should credit him in the commit message.\n\nIndeed, Benoît Lobréau reported this behavior to me.\n\nRegards,\n\n\n", "msg_date": "Mon, 20 Apr 2020 14:22:35 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "On Mon, Apr 20, 2020 at 02:22:35PM +0200, Jehan-Guillaume de Rorthais wrote:\n> The problem is that we would have to read the controldata file each time we\n> wonder if a segment should be archived/removed. Moreover, the controldata\n> file might not be in sync quickly enough with the real state for some other\n> code path or futur needs.\n\nI don't think that this is what Horiguchi-san meant here. 
What I got\nfrom his previous message would be to be to copy the shared value from\nthe control file when necessary, and have the shared state use only a\nsubset of the existing values of DBState, aka:\n- DB_IN_CRASH_RECOVERY\n- DB_IN_ARCHIVE_RECOVERY\n- DB_IN_PRODUCTION\nStill, that sounds wrong to me because then somebody would be tempted\nto change the shared value thinking that things like DB_SHUTDOWNING,\nDB_SHUTDOWNED_* or DB_STARTUP are valid but we don't want that here.\nNote that there may be a case for DB_STARTUP to be used in\nXLOGShmemInit(), but I'd rather let the code use the safest default,\nDB_IN_CRASH_RECOVERY to control that we won't remove .ready files by\ndefault until the startup process sees fit to do the actual switch\ndepending on the checkpoint record lookup, if archive recovery was\nactually requested, etc.\n\n> Indeed, Benoît Lobréau reported this behavior to me.\n\nNoted. Thanks for the information. I don't think that I have ever\nmet Benoît in person, do I? Tell him that I owe him one beer or a\nbeverage of his choice when we meet IRL, and that he had better use\nthis message-id to make me keep my promise :)\n--\nMichael", "msg_date": "Tue, 21 Apr 2020 11:15:01 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "At Tue, 21 Apr 2020 11:15:01 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Mon, Apr 20, 2020 at 02:22:35PM +0200, Jehan-Guillaume de Rorthais wrote:\n> > The problem is that we would have to read the controldata file each time we\n> > wonder if a segment should be archived/removed. Moreover, the controldata\n> > file might not be in sync quickly enough with the real state for some other\n> > code path or futur needs.\n> \n> I don't think that this is what Horiguchi-san meant here. 
What I got\n> from his previous message would be to be to copy the shared value from\n> the control file when necessary, and have the shared state use only a\n> subset of the existing values of DBState, aka:\n> - DB_IN_CRASH_RECOVERY\n> - DB_IN_ARCHIVE_RECOVERY\n> - DB_IN_PRODUCTION\n\nFirst I thought as above, but I thought that we could use\nControlFile->state itself in this case, by regarding the symbols less\nthan DB_IN_CRASH_RECOVERY as RECOVERY_STATE_CRASH. I don't think\nthere's no problem if the update to DB_IN_ARCHIVE_RECOVERY reaches\ncheckpointer with some delay.\n\n> Still, that sounds wrong to me because then somebody would be tempted\n> to change the shared value thinking that things like DB_SHUTDOWNING,\n> DB_SHUTDOWNED_* or DB_STARTUP are valid but we don't want that here.\n\nThat is not an issue if we just use DBState to know whether we have\nstarted archive recovery.\n\n> Note that there may be a case for DB_STARTUP to be used in\n> XLOGShmemInit(), but I'd rather let the code use the safest default,\n> DB_IN_CRASH_RECOVERY to control that we won't remove .ready files by\n> default until the startup process sees fit to do the actual switch\n> depending on the checkpoint record lookup, if archive recovery was\n> actually requested, etc.\n\nI'm not sure I read this correctly. But I think I agree to this.\n\n+\tif (!XLogArchivingAlways() &&\n+\t\tGetRecoveryState() == RECOVERY_STATE_ARCHIVE)\n\nIs rewritten as \n\n+\tif (!XLogArchivingAlways() &&\n+\t\tGetDBState() > DB_IN_CRASH_RECOVERY)\n\nFWIW, what annoyed me is there are three variables that are quite\nsimilar but has different domains, ControlFile->state,\nXLogCtl->SharedRecoveryState, and LocalRecoveryInProgress. I didn't\nmind there were two, but three seems a bit too many to me.\n\nBut it may be different issue.\n\n> > Indeed, Benoît Lobréau reported this behavior to me.\n> \n> Noted. Thanks for the information. I don't think that I have ever\n> met Benoît in person, do I? 
Tell him that I owe him one beer or a\n> beverage of his choice when we meet IRL, and that he had better use\n> this message-id to make me keep my promise :)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 21 Apr 2020 12:09:25 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "On Tue, Apr 21, 2020 at 12:09:25PM +0900, Kyotaro Horiguchi wrote:\n> +\tif (!XLogArchivingAlways() &&\n> +\t\tGetRecoveryState() == RECOVERY_STATE_ARCHIVE)\n> \n> Is rewritten as \n> \n> +\tif (!XLogArchivingAlways() &&\n> +\t\tGetDBState() > DB_IN_CRASH_RECOVERY)\n> \n> FWIW, what annoyed me is there are three variables that are quite\n> similar but has different domains, ControlFile->state,\n> XLogCtl->SharedRecoveryState, and LocalRecoveryInProgress. I didn't\n> mind there were two, but three seems a bit too many to me.\n\nThat's actually the pattern I would avoid for clarity. 
There is no\nneed to add more dependencies to the entries of DBState for the sake\nof this patch, and this smells like a trap if more values are added to\nit in an order that does not match what we have been assuming in the\ncontext of this thread.\n--\nMichael", "msg_date": "Tue, 21 Apr 2020 13:57:39 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "At Tue, 21 Apr 2020 13:57:39 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Tue, Apr 21, 2020 at 12:09:25PM +0900, Kyotaro Horiguchi wrote:\n> > +\tif (!XLogArchivingAlways() &&\n> > +\t\tGetRecoveryState() == RECOVERY_STATE_ARCHIVE)\n> > \n> > Is rewritten as \n> > \n> > +\tif (!XLogArchivingAlways() &&\n> > +\t\tGetDBState() > DB_IN_CRASH_RECOVERY)\n> > \n> > FWIW, what annoyed me is there are three variables that are quite\n> > similar but has different domains, ControlFile->state,\n> > XLogCtl->SharedRecoveryState, and LocalRecoveryInProgress. I didn't\n> > mind there were two, but three seems a bit too many to me.\n> \n> That's actually the pattern I would avoid for clarity. There is no\n> need to add more dependencies to the entries of DBState for the sake\n> of this patch, and this smells like a trap if more values are added to\n> it in an order that does not match what we have been assuming in the\n> context of this thread.\n\nYes. 
Anyway that would be another issue, if it is an issue.\n\nI'm fine with the current state.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 21 Apr 2020 15:08:17 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "On Tue, 21 Apr 2020 15:08:17 +0900 (JST)\nKyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n\n> At Tue, 21 Apr 2020 13:57:39 +0900, Michael Paquier <michael@paquier.xyz>\n> wrote in \n> > On Tue, Apr 21, 2020 at 12:09:25PM +0900, Kyotaro Horiguchi wrote: \n> > > +\tif (!XLogArchivingAlways() &&\n> > > +\t\tGetRecoveryState() == RECOVERY_STATE_ARCHIVE)\n> > > \n> > > Is rewritten as \n> > > \n> > > +\tif (!XLogArchivingAlways() &&\n> > > +\t\tGetDBState() > DB_IN_CRASH_RECOVERY)\n> > > \n> > > FWIW, what annoyed me is there are three variables that are quite\n> > > similar but has different domains, ControlFile->state,\n> > > XLogCtl->SharedRecoveryState, and LocalRecoveryInProgress. I didn't\n> > > mind there were two, but three seems a bit too many to me. \n\nIn fact, my original goal [1] was to avoid adding another shared boolean related\nto the same topic. So I understand your feeling.\n\nI played with this idea again, based on your argument that there's no problem\nif the update to DB_IN_ARCHIVE_RECOVERY reaches checkpointer with some delay.\n\nThe other point I feel bad with is to open and check the controlfile again and\nagain for each segment to archive, even on a running production instance.\nIt's not that it would be heavy, but it feels overkill to fetch this information\nthat should be available more easily.\n\nThat leads me to an idea where we would keep the ControlFile data up-to-date in\nshared memory. 
There's a few duplicates between ControlFile and XLogCtl, so\nmaybe it could make the code a little simpler at some other places than just\nfixing $SUBJECT using DBState? I'm not sure of the implications and impacts\nthough. This seems way bigger than the current fix and with many traps on the\nway. Maybe we could discuss this in another thread if you think it deserves it?\n\nRegards,\n\n[1]\nhttps://www.postgresql.org/message-id/flat/20200403182625.0fccc6fd%40firost#28a756094a4b1f3dd24927fb6311927d\n\n\n", "msg_date": "Wed, 22 Apr 2020 00:00:22 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "Hello,\n\nI did another round of review of v8.\n\n- LocalRecoveryInProgress = xlogctl->SharedRecoveryInProgress;\n+ LocalRecoveryInProgress = (xlogctl->SharedRecoveryState !=\n RECOVERY_STATE_DONE);\n\nDo we need to acquire info_lck to look at the state here, as we do in\nGetRecoveryState()? Why is it missing from previous code where\nSharedRecoveryInProgress was protected by info_lck as well?\n\nPlus, the new line length overflow the 80-column, but I'm not sure where to\nbreak this line.\n\n+if ($Config{osname} eq 'MSWin32')\n+{\n+\n+\t# some Windows Perls at least don't like IPC::Run's start/kill_kill\nregime.\n+\tplan skip_all => \"Test fails on Windows perl\";\n+}\n\nIn fact, this was inherited from 011_crash_recovery.pl where I originally\nadded some tests. As 020_archive_status.pl doesn't use IPC::Run, the comment is\nwrong. But I wonder if this whole block is really needed. Unfortunately I can't\ntest on MSWin32 :/\n\nOn Tue, 21 Apr 2020 11:15:01 +0900\nMichael Paquier <michael@paquier.xyz> wrote:\n\n> > Indeed, Benoît Lobréau reported this behavior to me. \n> \n> Noted. Thanks for the information. 
I don't think that I have ever\n> met Benoît in person, do I?\n\nI don't think so.\n\n> Tell him that I owe him one beer or a beverage of his choice when we meet\n> IRL, and that he had better use this message-id to make me keep my promise :)\n\nI told him (but I'm sure he was reading anyway :)).\n\nRegards,\n\n\n", "msg_date": "Wed, 22 Apr 2020 00:41:21 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "On Wed, Apr 22, 2020 at 12:00:22AM +0200, Jehan-Guillaume de Rorthais wrote:\n> That leads me an idea where we would keep the ControlFile data up-to-date in\n> shared memory. There's a few duplicates between ControlFile and XLogCtl, so\n> maybe it could make the code a little simpler at some other places than just\n> fixing $SUBJECT using DBState? I'm not sure of the implications and impacts\n> though. This seems way bigger than the current fix and with many traps on the\n> way. Maybe we could discuss this in another thread if you think it deserves it?\n\nIt seems to me that this could have wider applications than just the\nrecovery state, no? I would recommend to keep this discussion on a\nseparate thread to give more visibility to the topic.\n--\nMichael", "msg_date": "Wed, 22 Apr 2020 08:07:52 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "On Wed, Apr 22, 2020 at 12:41:21AM +0200, Jehan-Guillaume de Rorthais wrote:\n> Do we need to acquire info_lck to look at the state here, as we do in\n> GetRecoveryState()? 
Why is it missing from previous code where\n> SharedRecoveryInProgress was protected by info_lck as well?\n\nPlease see 1a3d104.\n\n> Plus, the new line length overflow the 80-column, but I'm not sure where to\n> break this line.\n\npgindent has been run on v8, and it did not complain.\n\n> In fact, this was inherited from 011_crash_recovery.pl where I originally\n> added some tests. As 020_archive_status.pl doesn't use IPC::Run, the comment is\n> wrong. But I wonder if this whole block is really needed. Unfortunately I can't\n> test on MSWin32 :/\n\nYou are right here. The restriction can be removed, and I have\nchecked that the test from v8 is able to pass on my Windows dev VM.\n--\nMichael", "msg_date": "Wed, 22 Apr 2020 10:19:35 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "On Wed, Apr 22, 2020 at 10:19:35AM +0900, Michael Paquier wrote:\n> You are right here. The restriction can be removed, and I have\n> checked that the test from v8 is able to pass on my Windows dev VM.\n\nAttached are versions for each branch down to 9.5. While working on\nthe backpatch, I have not found major conflicts except one thing:\nup to 10, Postgres does WAL segment recycling after two completed\ncheckpoints, and the 8th test of the script relies on the behavior of\n11~ of one completed checkpoint (first .ready file present in the cold\nbackup but removed removed from $standby1). I have taken the simplest\napproach to fix the test by checking that the .ready file actually\nexists, while the rest of the test remains the same.\n\nIt is worth noting that for 9.5 and 9.6 the test had compatibility\nissues with the renaming of pg_xlog to pg_wal, including paths and\nfunctions. 
The calls to poll_query_until() also needed tweaks, but\nI got the tests to work.\n--\nMichael", "msg_date": "Wed, 22 Apr 2020 16:32:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "On Wed, 22 Apr 2020 10:19:35 +0900\nMichael Paquier <michael@paquier.xyz> wrote:\n\n> On Wed, Apr 22, 2020 at 12:41:21AM +0200, Jehan-Guillaume de Rorthais wrote:\n> > Do we need to acquire info_lck to look at the state here, as we do in\n> > GetRecoveryState()? Why is it missing from previous code where\n> > SharedRecoveryInProgress was protected by info_lck as well? \n> \n> Please see 1a3d104.\n\nUnderstood. Interesting. Thank you.\n\n> > Plus, the new line length overflow the 80-column, but I'm not sure where to\n> > break this line. \n> \n> pgindent has been run on v8, and it did not complain.\n\nOK.\n\n> > In fact, this was inherited from 011_crash_recovery.pl where I originally\n> > added some tests. As 020_archive_status.pl doesn't use IPC::Run, the\n> > comment is wrong. But I wonder if this whole block is really needed.\n> > Unfortunately I can't test on MSWin32 :/ \n> \n> You are right here. The restriction can be removed, and I have\n> checked that the test from v8 is able to pass on my Windows dev VM.\n\nThanks!\n\n\n", "msg_date": "Wed, 22 Apr 2020 16:14:20 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "On Wed, 22 Apr 2020 16:32:23 +0900\nMichael Paquier <michael@paquier.xyz> wrote:\n\n> On Wed, Apr 22, 2020 at 10:19:35AM +0900, Michael Paquier wrote:\n> > You are right here. The restriction can be removed, and I have\n> > checked that the test from v8 is able to pass on my Windows dev VM. \n> \n> Attached are versions for each branch down to 9.5. 
While working on\n> the backpatch, I have not found major conflicts except one thing:\n> up to 10, Postgres does WAL segment recycling after two completed\n> checkpoints, and the 8th test of the script relies on the behavior of\n> 11~ of one completed checkpoint (first .ready file present in the cold\n> backup but removed removed from $standby1). I have taken the simplest\n> approach to fix the test by checking that the .ready file actually\n> exists, while the rest of the test remains the same.\n\nThis test seems useless to me. It should either be removed or patched to test\nthe signal has been removed after a second restartpoint.\n\nPlease, find in attachment a patch for 9.6 implementing this. If it seems\nreasonable to you, I can create the backpatch to 9.5.\n\n> It is worth noting that for 9.5 and 9.6 the test had compatibility\n> issues with the renaming of pg_xlog to pg_wal, including paths and\n> functions. The calls to poll_query_until() also needed tweaks, but\n> I got the tests to work.\n\nThanks for the backpatching work!\n\nRegards,", "msg_date": "Wed, 22 Apr 2020 17:58:24 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "On Wed, 22 Apr 2020 17:58:24 +0200\nJehan-Guillaume de Rorthais <jgdr@dalibo.com> wrote:\n\n> On Wed, 22 Apr 2020 16:32:23 +0900\n> Michael Paquier <michael@paquier.xyz> wrote:\n> \n> [...] \n> [...] \n> [...] \n> \n> This test seems useless to me. It should either be removed or patched to test\n> the signal has been removed after a second restartpoint.\n> \n> Please, find in attachment a patch for 9.6 implementing this. If it seems\n> reasonable to you, I can create the backpatch to 9.5.\n\nI found an extra useless line of code in v9 patch. Please, find in\nattachment v10. 
Sorry for this.\n\nRegards,", "msg_date": "Wed, 22 Apr 2020 18:17:17 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "On Wed, Apr 22, 2020 at 06:17:17PM +0200, Jehan-Guillaume de Rorthais wrote:\n> I found an extra useless line of code in v9 patch. Please, find in\n> attachment v10. Sorry for this.\n\nThanks for helping here, your changes make sense. This looks mostly\nfine to me except that part:\n+$standby1->poll_query_until('postgres',\n+ qq{ SELECT pg_xlog_location_diff('$primary_lsn', pg_last_xlog_replay_location()) = 0 })\n+ or die \"Timed out while waiting for xlog replay\";\nHere we should check if $primary_lsn is at least\npg_last_xlog_replay_location(). Checking for an equality may stuck\nthe test if more WAL gets replayed. For example you could have a\nconcurrent autovacuum generating WAL.\n--\nMichael", "msg_date": "Thu, 23 Apr 2020 08:46:18 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "FWIW, the test for 10- looks fine, but I have one qustion.\n\n+ \t'archive success reported in pg_stat_archiver for WAL segment $segment_name_\n\nThis seems intending to show an actual segment name in the message,\nbut it is really shown as \"... WAL segment $segment_name_1\". Is that\nintended?\n\nAt Thu, 23 Apr 2020 08:46:18 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Wed, Apr 22, 2020 at 06:17:17PM +0200, Jehan-Guillaume de Rorthais wrote:\n> > I found an extra useless line of code in v9 patch. Please, find in\n> > attachment v10. Sorry for this.\n> \n> Thanks for helping here, your changes make sense. 
This looks mostly\n> fine to me except that part:\n> +$standby1->poll_query_until('postgres',\n> + qq{ SELECT pg_xlog_location_diff('$primary_lsn', pg_last_xlog_replay_location()) = 0 })\n> + or die \"Timed out while waiting for xlog replay\";\n> Here we should check if $primary_lsn is at least\n> pg_last_xlog_replay_location(). Checking for an equality may stuck\n> the test if more WAL gets replayed. For example you could have a\n> concurrent autovacuum generating WAL.\n\nAutovacuum is turned off in this case, but anyway other kinds of WAL\nrecords can be generated.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 23 Apr 2020 14:05:46 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "On Thu, 23 Apr 2020 14:05:46 +0900 (JST)\nKyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n\n> FWIW, the test for 10- looks fine, but I have one qustion.\n> \n> + \t'archive success reported in pg_stat_archiver for WAL segment\n> $segment_name_\n> \n> This seems intending to show an actual segment name in the message,\n> but it is really shown as \"... WAL segment $segment_name_1\". Is that\n> intended?\n\nGood catch. Fixed in v11. Thank you.\n\n> At Thu, 23 Apr 2020 08:46:18 +0900, Michael Paquier <michael@paquier.xyz>\n> wrote in \n> > On Wed, Apr 22, 2020 at 06:17:17PM +0200, Jehan-Guillaume de Rorthais\n> > wrote: \n> > > I found an extra useless line of code in v9 patch. Please, find in\n> > > attachment v10. Sorry for this. \n> > \n> > Thanks for helping here, your changes make sense. 
This looks mostly\n> > fine to me except that part:\n> > +$standby1->poll_query_until('postgres',\n> > + qq{ SELECT pg_xlog_location_diff('$primary_lsn',\n> > pg_last_xlog_replay_location()) = 0 })\n> > + or die \"Timed out while waiting for xlog replay\";\n> > Here we should check if $primary_lsn is at least\n> > pg_last_xlog_replay_location(). Checking for an equality may stuck\n> > the test if more WAL gets replayed. For example you could have a\n> > concurrent autovacuum generating WAL. \n> \n> Autovacuum is turned off in this case, but anyway other kinds of WAL\n> records can be generated.\n\nmake sense. Fixed in v11.\n\nPlease, find in v11 for version 9.5, 9.6 and 10.\n\nRegards,", "msg_date": "Thu, 23 Apr 2020 18:59:53 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "On Thu, Apr 23, 2020 at 06:59:53PM +0200, Jehan-Guillaume de Rorthais wrote:\n> Please, find in v11 for version 9.5, 9.6 and 10.\n\nI have worked more on that using your v11, tweaked few comments in the\nnew test parts for 9.5~10, and applied the whole. Thanks all for your\nwork. I am keeping now an eye on the buildfarm.\n--\nMichael", "msg_date": "Fri, 24 Apr 2020 08:54:02 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "Hi,\n\nOn 2020-04-24 08:54:02 +0900, Michael Paquier wrote:\n> On Thu, Apr 23, 2020 at 06:59:53PM +0200, Jehan-Guillaume de Rorthais wrote:\n> > Please, find in v11 for version 9.5, 9.6 and 10.\n>\n> I have worked more on that using your v11, tweaked few comments in the\n> new test parts for 9.5~10, and applied the whole. Thanks all for your\n> work. I am keeping now an eye on the buildfarm.\n\nI am confused by this commit. 
You added shared memory to differentiate\nbetween crash recovery and standby mode/archive recovery, correct? You\nwrite:\n\n> This commit fixes the regression by tracking in shared memory if a live\n> cluster is either in crash recovery or archive recovery as the handling\n> of WAL segments ready to be archived is different in both cases (those\n> WAL segments should not be removed during crash recovery), and by using\n> this new shared memory state to decide if a segment can be recycled or\n> not.\n\nBut don't we pretty much know this already from the state of the system?\nDuring crash recovery there's nothing running RemoveOldXLogFiles() but\nthe startup process. Right? And in DB_IN_ARCHIVE_RECOVERY it should only\nhappen as part of restartpoints (i.e. the checkpointer).\n\nDid you add the new shared state to avoid deducing things from the\n\"environment\"? If so, it should really be mentioned in the commit\nmessage & code. Because:\n\n> Previously, it was not possible to know if a cluster was in crash\n> recovery or archive recovery as the shared state was able to track only\n> if recovery was happening or not, leading to the problem.\n\nreally doesn't make that obvious.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 23 Apr 2020 18:48:56 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> I have worked more on that using your v11, tweaked few comments in the\n> new test parts for 9.5~10, and applied the whole. Thanks all for your\n> work. I am keeping now an eye on the buildfarm.\n\nLooks like the news is not good :-(\n\nI see that my own florican is one of the failing critters, though\nit failed only on HEAD which seems odd. 
Any suggestions what to\nlook for?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 23 Apr 2020 22:21:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "On Thu, Apr 23, 2020 at 10:21:15PM -0400, Tom Lane wrote:\n> Looks like the news is not good :-(\n\nYes, I was looking at that for the last couple of hours, and just\npushed something to put back the buildfarm to a green state for now\n(based on the first results things seem stable now) by removing the\ndefective subset of tests.\n\n> I see that my own florican is one of the failing critters, though\n> it failed only on HEAD which seems odd. Any suggestions what to\n> look for?\n\nThe issue comes from the parts of the test where we expect some .ready\nfiles to exist (or not) after triggering a restartpoint to force some\nsegments to be recycled. And looking more at it, I suspect that the\nissue is actually that we don't make sure in the test that the\nstandbys started have replayed up to the segment switch record\ntriggered on the primary (the one within generate_series(10,20)), and\nthen the follow-up restart point does not actually recycle the\nsegments we expect to recycle. That's more likely going to be a\nproblem on slower machines as the window gets wider between the moment\nthe standbys reach their consistency point and the moment the switch\nrecord is replayed.\n--\nMichael", "msg_date": "Fri, 24 Apr 2020 12:43:51 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "On Thu, Apr 23, 2020 at 06:48:56PM -0700, Andres Freund wrote:\n> But don't we pretty much know this already from the state of the system?\n> During crash recovery there's nothing running RemoveOldXLogFiles() but\n> the startup process. Right? 
And in DB_IN_ARCHIVE_RECOVERY it should only\n> happen as part of restartpoints (i.e. the checkpointer).\n> \n> Did you add the new shared state to avoid deducing things from the\n> \"environment\"? If so, it should really be mentioned in the commit\n> message & code. Because:\n\nHmm. Sorry, I see your point. The key of the logic here is from\nXLogArchiveCheckDone() which could be called from other processes than\nthe startup process. There is one code path at the end of a base\nbackup for backup history files where not using a shared state would\nbe a problem.\n--\nMichael", "msg_date": "Fri, 24 Apr 2020 16:05:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "On Fri, 24 Apr 2020 12:43:51 +0900\nMichael Paquier <michael@paquier.xyz> wrote:\n\n> On Thu, Apr 23, 2020 at 10:21:15PM -0400, Tom Lane wrote:\n> > Looks like the news is not good :-( \n> \n> Yes, I was looking at that for the last couple of hours, and just\n> pushed something to put back the buildfarm to a green state for now\n> (based on the first results things seem stable now) by removing the\n> defective subset of tests.\n> \n> > I see that my own florican is one of the failing critters, though\n> > it failed only on HEAD which seems odd. Any suggestions what to\n> > look for? \n> \n> The issue comes from the parts of the test where we expect some .ready\n> files to exist (or not) after triggering a restartpoint to force some\n> segments to be recycled. And looking more at it, I suspect that the\n> issue is actually that we don't make sure in the test that the\n> standbys started have replayed up to the segment switch record\n> triggered on the primary (the one within generate_series(10,20)), and\n> then the follow-up restart point does not actually recycle the\n> segments we expect to recycle. 
That's more likely going to be a\n> problem on slower machines as the window gets wider between the moment\n> the standbys reach their consistency point and the moment the switch\n> record is replayed.\n\nIndeed.\n\nIn regard with your fix, as we don't know if the standby caught up with the\nlatest available record, there's really no point to keep this test either:\n\n # Recovery with archive_mode=on should not create .ready files.\n # Note that this segment did not exist in the backup.\n ok( !-f \"$standby1_data/$segment_path_2_ready\",\n \".ready file for WAL segment $segment_name_2 not created on standby\n when archive_mode=on on standby\" );\n\nI agree the three tests could be removed as they were not covering the bug we\nwere chasing. However, they might still be useful to detect futur non expected\nbehavior changes. If you agree with this, please, find in attachment a patch\nproposal against HEAD that recreate these three tests **after** a waiting loop\non both standby1 and standby2. This waiting loop is inspired from the tests in\n9.5 -> 10.\n\nRegards,", "msg_date": "Fri, 24 Apr 2020 15:03:00 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "On Fri, Apr 24, 2020 at 03:03:00PM +0200, Jehan-Guillaume de Rorthais wrote:\n> I agree the three tests could be removed as they were not covering the bug we\n> were chasing. However, they might still be useful to detect futur non expected\n> behavior changes. If you agree with this, please, find in attachment a patch\n> proposal against HEAD that recreate these three tests **after** a waiting loop\n> on both standby1 and standby2. This waiting loop is inspired from the tests in\n> 9.5 -> 10.\n\nFWIW, I would prefer keeping all three tests as well.\n\nSo.. 
I have spent more time on this problem and mereswin here is a\nvery good sample because it failed all three tests:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mereswine&dt=2020-04-24%2006%3A03%3A53\n\nFor standby2, we get this failure:\nok 11 - .ready file for WAL segment 000000010000000000000001 existing\n in backup is kept with archive_mode=always on standby\nnot ok 12 - .ready file for WAL segment 000000010000000000000002\n created with archive_mode=always on standby\n\nThen, looking at 020_archive_status_standby2.log, we have the\nfollowing logs:\n2020-04-24 02:08:32.032 PDT [9841:3] 020_archive_status.pl LOG:\nstatement: CHECKPOINT\n[...]\n2020-04-24 02:08:32.303 PDT [9821:7] LOG: restored log file\n\"000000010000000000000002\" from archive\n\nIn this case, the test forced a checkpoint to test the segment\nrecycling *before* the extra restored segment we'd like to work on was\nactually restored. So it looks like my initial feeling about the\ntiming issue was right, and I am also able to reproduce the original\nset of failures by adding a manual sleep to delay restores of\nsegments, like that for example:\n--- a/src/backend/access/transam/xlogarchive.c\n+++ b/src/backend/access/transam/xlogarchive.c\n@@ -74,6 +74,8 @@ RestoreArchivedFile(char *path, const char *xlogfname,\n if (recoveryRestoreCommand == NULL ||\n strcmp(recoveryRestoreCommand, \"\") == 0)\n goto not_available;\n\n+ pg_usleep(10 * 1000000); /* 10s */\n+\n /*\n\nWith your patch the problem does not show up anymore even with the\ndelay added, so I would like to apply what you have sent and add back\nthose tests. 
For now, I would just patch HEAD though as that's not\nworth the risk of destabilizing stable branches in the buildfarm.\n\n> $primary->poll_query_until('postgres',\n> \tq{SELECT archived_count FROM pg_stat_archiver}, '1')\n> - or die \"Timed out while waiting for archiving to finish\";\n> +\tor die \"Timed out while waiting for archiving to finish\";\n\nSome noise in the patch. This may have come from some unfinished\nbusiness with pgindent.\n--\nMichael", "msg_date": "Mon, 27 Apr 2020 16:49:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "At Mon, 27 Apr 2020 16:49:45 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Fri, Apr 24, 2020 at 03:03:00PM +0200, Jehan-Guillaume de Rorthais wrote:\n> > I agree the three tests could be removed as they were not covering the bug we\n> > were chasing. However, they might still be useful to detect futur non expected\n> > behavior changes. If you agree with this, please, find in attachment a patch\n> > proposal against HEAD that recreate these three tests **after** a waiting loop\n> > on both standby1 and standby2. This waiting loop is inspired from the tests in\n> > 9.5 -> 10.\n> \n> FWIW, I would prefer keeping all three tests as well.\n> \n> So.. 
I have spent more time on this problem and mereswin here is a\n> very good sample because it failed all three tests:\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mereswine&dt=2020-04-24%2006%3A03%3A53\n> \n> For standby2, we get this failure:\n> ok 11 - .ready file for WAL segment 000000010000000000000001 existing\n> in backup is kept with archive_mode=always on standby\n> not ok 12 - .ready file for WAL segment 000000010000000000000002\n> created with archive_mode=always on standby\n> \n> Then, looking at 020_archive_status_standby2.log, we have the\n> following logs:\n> 2020-04-24 02:08:32.032 PDT [9841:3] 020_archive_status.pl LOG:\n> statement: CHECKPOINT\n> [...]\n> 2020-04-24 02:08:32.303 PDT [9821:7] LOG: restored log file\n> \"000000010000000000000002\" from archive\n> \n> In this case, the test forced a checkpoint to test the segment\n> recycling *before* the extra restored segment we'd like to work on was\n> actually restored. So it looks like my initial feeling about the\n> timing issue was right, and I am also able to reproduce the original\n> set of failures by adding a manual sleep to delay restores of\n> segments, like that for example:\n> --- a/src/backend/access/transam/xlogarchive.c\n> +++ b/src/backend/access/transam/xlogarchive.c\n> @@ -74,6 +74,8 @@ RestoreArchivedFile(char *path, const char *xlogfname,\n> if (recoveryRestoreCommand == NULL ||\n> strcmp(recoveryRestoreCommand, \"\") == 0)\n> goto not_available;\n> \n> + pg_usleep(10 * 1000000); /* 10s */\n> +\n> /*\n> \n> With your patch the problem does not show up anymore even with the\n> delay added, so I would like to apply what you have sent and add back\n> those tests. For now, I would just patch HEAD though as that's not\n> worth the risk of destabilizing stable branches in the buildfarm.\n\nAgreed to the diagnosis and the fix. 
The fix reliably cause a restart\npoint then the restart point manipulats the status files the right way\nbefore the CHECKPOINT command resturns, in the both cases.\n\nIf I would add something to the fix, the following line may need a\ncomment.\n\n+# Wait for the checkpoint record is replayed so that the following\n+# CHECKPOINT causes a restart point reliably.\n|+$standby1->poll_query_until('postgres',\n|+\tqq{ SELECT pg_wal_lsn_diff(pg_last_wal_replay_lsn(), '$primary_lsn') >= 0 }\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 27 Apr 2020 18:21:07 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "On Mon, 27 Apr 2020 18:21:07 +0900 (JST)\nKyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n\n> At Mon, 27 Apr 2020 16:49:45 +0900, Michael Paquier <michael@paquier.xyz>\n> wrote in \n[...]\n> > With your patch the problem does not show up anymore even with the\n> > delay added, so I would like to apply what you have sent and add back\n> > those tests. For now, I would just patch HEAD though as that's not\n> > worth the risk of destabilizing stable branches in the buildfarm.\n\nGood for me. Thanks!\n\n> Agreed to the diagnosis and the fix. 
The fix reliably cause a restart\n> point then the restart point manipulats the status files the right way\n> before the CHECKPOINT command resturns, in the both cases.\n> \n> If I would add something to the fix, the following line may need a\n> comment.\n> \n> +# Wait for the checkpoint record is replayed so that the following\n> +# CHECKPOINT causes a restart point reliably.\n> |+$standby1->poll_query_until('postgres',\n> |+\tqq{ SELECT pg_wal_lsn_diff(pg_last_wal_replay_lsn(),\n> '$primary_lsn') >= 0 }\n\nAgree.\n\nRegards,\n\n\n", "msg_date": "Mon, 27 Apr 2020 12:42:47 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "On Mon, Apr 27, 2020 at 06:21:07PM +0900, Kyotaro Horiguchi wrote:\n> Agreed to the diagnosis and the fix. The fix reliably cause a restart\n> point then the restart point manipulats the status files the right way\n> before the CHECKPOINT command resturns, in the both cases.\n\nThanks for checking!\n\n> If I would add something to the fix, the following line may need a\n> comment.\n> \n> +# Wait for the checkpoint record is replayed so that the following\n> +# CHECKPOINT causes a restart point reliably.\n> |+$standby1->poll_query_until('postgres',\n> |+\tqq{ SELECT pg_wal_lsn_diff(pg_last_wal_replay_lsn(), '$primary_lsn') >= 0 }\n\nMakes sense, added a comment and applied to HEAD. 
I have also\nimproved the comment around the split with pg_switch_wal(), and\nactually simplified the test to use as wait point the return value\nfrom the function.\n--\nMichael", "msg_date": "Tue, 28 Apr 2020 08:01:38 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "On Tue, 28 Apr 2020 08:01:38 +0900\nMichael Paquier <michael@paquier.xyz> wrote:\n\n> On Mon, Apr 27, 2020 at 06:21:07PM +0900, Kyotaro Horiguchi wrote:\n> > Agreed to the diagnosis and the fix. The fix reliably cause a restart\n> > point then the restart point manipulats the status files the right way\n> > before the CHECKPOINT command resturns, in the both cases. \n> \n> Thanks for checking!\n> \n> > If I would add something to the fix, the following line may need a\n> > comment.\n> > \n> > +# Wait for the checkpoint record is replayed so that the following\n> > +# CHECKPOINT causes a restart point reliably.\n> > |+$standby1->poll_query_until('postgres',\n> > |+\tqq{ SELECT pg_wal_lsn_diff(pg_last_wal_replay_lsn(),\n> > '$primary_lsn') >= 0 } \n> \n> Makes sense, added a comment and applied to HEAD. I have also\n> improved the comment around the split with pg_switch_wal(), and\n> actually simplified the test to use as wait point the return value\n> from the function.\n\nThank you guys for your help, reviews and commits!\n\n\n", "msg_date": "Tue, 28 Apr 2020 01:58:47 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" }, { "msg_contents": "On Tue, Apr 28, 2020 at 01:58:47AM +0200, Jehan-Guillaume de Rorthais wrote:\n> Thank you guys for your help, reviews and commits!\n\nThe buildfarm does not complain after the latest commit, meaning that\nwe are good here. 
Thanks for your efforts.\n--\nMichael", "msg_date": "Tue, 28 Apr 2020 09:41:06 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [BUG] non archived WAL removed during production crash recovery" } ]
[ { "msg_contents": "Hello hackers!\n\nI am a participant in GSoC this year. I’m interested in the development of the performance farm benchmarks and website. In the proposal, I talked about how to resolve current issues, integrate social authentication into the project, and migrate to a later Django version for long-term support.\n\nThis is my first time participating in GSoC, and I hope you can share your ideas and enjoy reading it.\n\nYou can find the proposal in the attachment.\n\nBest,\nPengyu", "msg_date": "Tue, 31 Mar 2020 16:29:45 +0000", "msg_from": "Zhou Pengyu <Ericzpy1998@outlook.com>", "msg_from_op": true, "msg_subject": "2020 GSoC Proposal: Performance Farm Benchmarks and Website\n Development" } ]
[ { "msg_contents": "Hi,\nThe bringetbitmap function returns an int64 value, but internally uses an int.\nTo prevent future bugs, fix it to the right type.\n\nbest regards,\n\nRanier Vilela", "msg_date": "Tue, 31 Mar 2020 14:16:56 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] Fix type var declaration (src/backend/access/brin/brin.c)" } ]
[ { "msg_contents": "Hi,\n\nI am trying to change the snapshot too old infrastructure so it\ncooperates with my snapshot scalability patch. While trying to\nunderstand the code sufficiently, I think I found a fairly serious\nissue:\n\nTo map the time-based old_snapshot_threshold to an xid that can be used\nas a cutoff for heap_page_prune(), we maintain a ringbuffer of\nold_snapshot_threshold + 10 entries in\noldSnapshotControlData->xid_by_minute[]. TransactionIdLimitedForOldSnapshots()\nuses that to (at least that's the goal) increase the horizon used for\npruning.\n\nThe problem is that there's no protection again the xids in the\nringbuffer getting old enough to wrap around. Given that practical uses\nof old_snapshot_threshold are likely to be several hours to several\ndays, that's not particularly hard to hit.\n\nThat then has the consequence that we can use an xid that's either from\n\"from the future\" (if bigger than the current xid), or more recent than\nappropriate (if it wrapped far enough to be below nextxid, but not yet\nolder than OldestXmin) as the OldestXmin argument to heap_page_prune().\n\nWhich in turn means that we can end up pruning much more recently\nremoved rows than intended.\n\n\nWhile that'd be bad on its own, the big problem is that we won't detect\nthat case on access, in contrast to the way old_snapshot_threshold is\nintended to work. The reason for that is detecting these errors happens\non the basis of timestamps - which obviously do not wrap around.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 31 Mar 2020 21:53:04 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "wraparound dangers in snapshot too old " }, { "msg_contents": "Hi,\n\nOn 2020-03-31 21:53:04 -0700, Andres Freund wrote:\n> I am trying to change the snapshot too old infrastructure so it\n> cooperates with my snapshot scalability patch. 
While trying to\n> understand the code sufficiently, I think I found a fairly serious\n> issue:\n\nI accidentally sent this email; I was intending to instead only send\nhttps://www.postgresql.org/message-id/20200401064008.qob7bfnnbu4w5cw4%40alap3.anarazel.de\n\nIt started to happen because of some default keybinding changes between\nmutt / emacs / magit, leading to emacs sending a buffer as an email,\nwhich I didn't intend.\n\nSorry for that,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 1 Apr 2020 09:13:54 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: wraparound dangers in snapshot too old" } ]
[ { "msg_contents": "Hi,\n\nSorry, this mail is somewhat long. But I think it's important that at\nleast a few committers read it, since I think we're going to have to\nmake some sort of call about what to do.\n\n\nI am trying to change the snapshot too old infrastructure so it\ncooperates with my snapshot scalability patch. While trying to\nunderstand the code sufficiently, I think I found a fairly serious\nissue:\n\nTo map the time-based old_snapshot_threshold to an xid that can be used\nas a cutoff for heap_page_prune(), we maintain a ringbuffer of\nold_snapshot_threshold + 10 entries in\noldSnapshotControlData->xid_by_minute[]. TransactionIdLimitedForOldSnapshots()\nuses that to (at least that's the goal) increase the horizon used for\npruning.\n\nThe problem is that there's no protection again the xids in the\nringbuffer getting old enough to wrap around. Given that practical uses\nof old_snapshot_threshold are likely to be several hours to several\ndays, that's not particularly hard to hit.\n\nThat then has the consequence that we can use an xid that's either from\n\"from the future\" (if bigger than the current xid), or more recent than\nappropriate (if it wrapped far enough to be below nextxid, but not yet\nolder than OldestXmin) as the OldestXmin argument to heap_page_prune().\n\nWhich in turn means that we can end up pruning much more recently\nremoved rows than intended.\n\nWhile that'd be bad on its own, the big problem is that we won't detect\nthat case on access, in contrast to the way old_snapshot_threshold is\nintended to work. The reason for that is detecting these errors happens\non the basis of timestamps - which obviously do not wrap around.\n\n\nThis made me want to try to reproduce the problem to at least some\ndegree. 
But I hit another wall: I can't make head or tails out of the\nvalues in the xid_by_minute[] mapping.\n\n\nI added some debug output to print the mapping before/after changes by\nMaintainOldSnapshotTimeMapping() (note that I used timestamps relative\nto the server start in minutes/seconds to make it easier to interpret).\n\nAnd the output turns out to be something like:\n\nWARNING: old snapshot mapping at \"before update\" with head ts: 7, current entries: 8 max entries: 15, offset: 0\n entry 0 (ring 0): min 7: xid 582921233\n entry 1 (ring 1): min 8: xid 654154155\n entry 2 (ring 2): min 9: xid 661972949\n entry 3 (ring 3): min 10: xid 666899382\n entry 4 (ring 4): min 11: xid 644169619\n entry 5 (ring 5): min 12: xid 644169619\n entry 6 (ring 6): min 13: xid 644169619\n entry 7 (ring 7): min 14: xid 644169619\n\nWARNING: head 420 s: updating existing bucket 4 for sec 660 with xmin 666899382\n\nWARNING: old snapshot mapping at \"after update\" with head ts: 7, current entries: 8 max entries: 15, offset: 0\n entry 0 (ring 0): min 7: xid 582921233\n entry 1 (ring 1): min 8: xid 654154155\n entry 2 (ring 2): min 9: xid 661972949\n entry 3 (ring 3): min 10: xid 666899382\n entry 4 (ring 4): min 11: xid 666899382\n entry 5 (ring 5): min 12: xid 644169619\n entry 6 (ring 6): min 13: xid 644169619\n entry 7 (ring 7): min 14: xid 644169619\n\nIt's pretty obvious that the xids don't make a ton of sense, I think:\nThey're not monotonically ordered. The same values exist multiple times,\ndespite xids being constantly used. 
Also, despite the ringbuffer\nsupposedly having 15 entries (that's snapshot_too_old = 5min + the 10 we\nalways add), and the workload having run for 14min, we only have 8\nentries.\n\nThen a bit later we get:\n\nWARNING: old snapshot mapping at \"before update\" with head ts: 7, current entries: 8 max entries: 15, offset: 0\n entry 0 (ring 0): min 7: xid 582921233\n entry 1 (ring 1): min 8: xid 654154155\n entry 2 (ring 2): min 9: xid 661972949\n entry 3 (ring 3): min 10: xid 666899382\n entry 4 (ring 4): min 11: xid 666899382\n entry 5 (ring 5): min 12: xid 666899382\n entry 6 (ring 6): min 13: xid 666899382\n entry 7 (ring 7): min 14: xid 666899382\n\nWARNING: head 420 s: filling 8 buckets starting at 0 for sec 900 with xmin 666899382\nWARNING: old snapshot mapping at \"after update\" with head ts: 15, current entries: 15 max entries: 15, offset: 1\n entry 0 (ring 1): min 15: xid 654154155\n entry 1 (ring 2): min 16: xid 661972949\n entry 2 (ring 3): min 17: xid 666899382\n entry 3 (ring 4): min 18: xid 666899382\n entry 4 (ring 5): min 19: xid 666899382\n entry 5 (ring 6): min 20: xid 666899382\n entry 6 (ring 7): min 21: xid 666899382\n entry 7 (ring 8): min 22: xid 666899382\n entry 8 (ring 9): min 23: xid 666899382\n entry 9 (ring 10): min 24: xid 666899382\n entry 10 (ring 11): min 25: xid 666899382\n entry 11 (ring 12): min 26: xid 666899382\n entry 12 (ring 13): min 27: xid 666899382\n entry 13 (ring 14): min 28: xid 666899382\n entry 14 (ring 0): min 29: xid 666899382\n\n\nAt a later point we then enter the \"Advance is so far that all old data\nis junk; start over.\" branch, and just reset the whole mapping:\n entry 0 (ring 0): min 30: xid 866711525\n\n\nThe problem, as far as I can tell, is that\noldSnapshotControl->head_timestamp appears to be intended to be the\noldest value in the ring. 
But we update it unconditionally in the \"need\na new bucket, but it might not be the very next one\" branch of\nMaintainOldSnapshotTimeMapping().\n\nWhile there's not really a clear-cut comment explaining whether\nhead_timestamp() is intended to be the oldest or the newest timestamp,\nit seems to me that the rest of the code treats it as the \"oldest\"\ntimestamp.\n\nTransactionId\nTransactionIdLimitedForOldSnapshots(TransactionId recentXmin,\n Relation relation)\n...\n ts = AlignTimestampToMinuteBoundary(ts)\n - (old_snapshot_threshold * USECS_PER_MINUTE);\n...\n LWLockAcquire(OldSnapshotTimeMapLock, LW_SHARED);\n\n if (oldSnapshotControl->count_used > 0\n && ts >= oldSnapshotControl->head_timestamp)\n {\n int offset;\n\n offset = ((ts - oldSnapshotControl->head_timestamp)\n / USECS_PER_MINUTE);\n if (offset > oldSnapshotControl->count_used - 1)\n offset = oldSnapshotControl->count_used - 1;\n offset = (oldSnapshotControl->head_offset + offset)\n % OLD_SNAPSHOT_TIME_MAP_ENTRIES;\n xlimit = oldSnapshotControl->xid_by_minute[offset];\n\n if (NormalTransactionIdFollows(xlimit, recentXmin))\n SetOldSnapshotThresholdTimestamp(ts, xlimit);\n }\n\n LWLockRelease(OldSnapshotTimeMapLock);\n\nSo we wind ts back by old_snapshot_threshold minutes. Then check that\nthat's still newer than oldSnapshotControl->head_timestamp - which\nclearly can't be the case if it were the newest ts. And as far as I can\ntell the indexing code also only makes sense if head_timestamp is the\noldest timestamp.\n\n\nThis would mean that most cases the old_snapshot_threshold feature is\nactive it would cause corruption: We'd not trigger errors on access,\nbecause the timestamp set with SetOldSnapshotThresholdTimestamp() would\nnot actually match the xids used to limit. But:\n\nIt turns out to be somewhat hard to get\nTransactionIdLimitedForOldSnapshots() to actually do something. 
Because\noldSnapshotControl->head_timestamp is updated much more often than it\nshould, the ts >= oldSnapshotControl->head_timestamp condition will\noften prevent the limiting code from being hit.\n\nBut it's not very unlikely either. Due to the update of head_timestamp\nto the current timestamp, we'll enter the \"existing mapping; advance xid\nif possible\" branch for up to OLD_SNAPSHOT_TIME_MAP_ENTRIES times. Which\nmeans we can hit it for\n/*\n * The structure used to map times to TransactionId values for the \"snapshot\n * too old\" feature must have a few entries at the tail to hold old values;\n * otherwise the lookup will often fail and the expected early pruning or\n * vacuum will not usually occur. It is best if this padding is for a number\n * of minutes greater than a thread would normally be stalled, but it's OK if\n * early vacuum opportunities are occasionally missed, so there's no need to\n * use an extreme value or get too fancy. 10 minutes seems plenty.\n */\n#define OLD_SNAPSHOT_PADDING_ENTRIES 10\n#define OLD_SNAPSHOT_TIME_MAP_ENTRIES (old_snapshot_threshold + OLD_SNAPSHOT_PADDING_ENTRIES)\n\n10 minutes, I think. There's some other ways too, but they're much less\nlikely.\n\nNote that once the issue has been hit once, future\nSetOldSnapshotThresholdTimestamp() calls that don't hit those 10 minutes\nwill also return a corrupted horizon, because\noldSnapshotControl->threshold_xid will have the wrong value, which then\nwill be used:\n\t\t/*\n\t\t * Failsafe protection against vacuuming work of active transaction.\n\t\t *\n\t\t * This is not an assertion because we avoid the spinlock for\n\t\t * performance, leaving open the possibility that xlimit could advance\n\t\t * and be more current; but it seems prudent to apply this limit. 
It\n\t\t * might make pruning a tiny bit less aggressive than it could be, but\n\t\t * protects against data loss bugs.\n\t\t */\n\t\tif (TransactionIdIsNormal(latest_xmin)\n\t\t\t&& TransactionIdPrecedes(latest_xmin, xlimit))\n\t\t\txlimit = latest_xmin;\n\n\t\tif (NormalTransactionIdFollows(xlimit, recentXmin))\n\t\t\treturn xlimit;\n\n\n\nAs far as I can tell, this code has been wrong since the feature has\nbeen committed. The tests don't show a problem, because none of this\ncode is reached when old_snapshot_threshold = 0 (which has no real world\nuse, it's purely for testing).\n\n\nI really don't know what to do here. The feature never worked and will\nsilently cause wrong query results. Fixing it seems like a pretty large\ntask - there's a lot more bugs. But ripping out a feature in stable\nbranches is pretty bad too.\n\n\nBefore figuring out the above, I spent the last several days trying to\nmake this feature work with my snapshot scalability patch. Trying to\navoid regressing old_snapshot_threshold behaviour / performance. But not\nit seems to me that there's no actual working feature that can be\npreserved.\n\n\nI am really tired.\n\n\nAndres.\n\n\n", "msg_date": "Tue, 31 Mar 2020 23:40:08 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "On Wed, Apr 1, 2020 at 2:40 AM Andres Freund <andres@anarazel.de> wrote:\n> The problem is that there's no protection again the xids in the\n> ringbuffer getting old enough to wrap around. Given that practical uses\n> of old_snapshot_threshold are likely to be several hours to several\n> days, that's not particularly hard to hit.\n\nPresumably this could be fixed by changing it to use FullTransactionId.\n\n> The problem, as far as I can tell, is that\n> oldSnapshotControl->head_timestamp appears to be intended to be the\n> oldest value in the ring. 
But we update it unconditionally in the \"need\n> a new bucket, but it might not be the very next one\" branch of\n> MaintainOldSnapshotTimeMapping().\n\nI agree, that doesn't look right. It's correct, I think, for the \"if\n(advance >= OLD_SNAPSHOT_TIME_MAP_ENTRIES)\" case, but not in the\n\"else\" case. In the \"else\" case, it should advance by 1 (wrapping if\nneeded) each time we take the \"if (oldSnapshotControl->count_used ==\nOLD_SNAPSHOT_TIME_MAP_ENTRIES)\" branch, and should remain unchanged in\nthe \"else\" branch for that if statement.\n\n> As far as I can tell, this code has been wrong since the feature has\n> been committed. The tests don't show a problem, because none of this\n> code is reached when old_snapshot_threshold = 0 (which has no real world\n> use, it's purely for testing).\n\nI'm pretty sure I complained about the fact that only the\nold_snapshot_threshold = 0 case was tested at the time this went in,\nbut I don't think Kevin was too convinced that we needed to do\nanything else, and realistically, if he'd tried for a regression test\nthat ran for 15 minutes, Tom would've gone ballistic.\n\n> I really don't know what to do here. The feature never worked and will\n> silently cause wrong query results. Fixing it seems like a pretty large\n> task - there's a lot more bugs. But ripping out a feature in stable\n> branches is pretty bad too.\n\nI don't know what other bugs there are, but the two you mention above\nlook fixable. Even if we decide that the feature can't be salvaged, I\nwould vote against ripping it out in back branches. 
I would instead\nargue for telling people not to use it and ripping it out in master.\nHowever, much as I'm not in love with all of the complexity this\nfeature adds, I don't see the problems you've reported here as serious\nenough to justify ripping it out.\n\nWhat exactly is the interaction of this patch with your snapshot\nscalability work?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 1 Apr 2020 10:01:07 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "Hi,\n\nOn 2020-03-31 23:40:08 -0700, Andres Freund wrote:\n> I added some debug output to print the mapping before/after changes by\n> MaintainOldSnapshotTimeMapping() (note that I used timestamps relative\n> to the server start in minutes/seconds to make it easier to interpret).\n\nNow attached.\n\nGreetings,\n\nAndres Freund", "msg_date": "Wed, 1 Apr 2020 07:01:28 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "Hi,\n\nOn 2020-04-01 10:01:07 -0400, Robert Haas wrote:\n> On Wed, Apr 1, 2020 at 2:40 AM Andres Freund <andres@anarazel.de> wrote:\n> > The problem is that there's no protection again the xids in the\n> > ringbuffer getting old enough to wrap around. Given that practical uses\n> > of old_snapshot_threshold are likely to be several hours to several\n> > days, that's not particularly hard to hit.\n> \n> Presumably this could be fixed by changing it to use FullTransactionId.\n\nThat doesn't exist in all the back branches. 
Think it'd be easier to add\ncode to explicitly prune it during MaintainOldSnapshotTimeMapping().\n\n\n> > The problem, as far as I can tell, is that\n> > oldSnapshotControl->head_timestamp appears to be intended to be the\n> > oldest value in the ring. But we update it unconditionally in the \"need\n> > a new bucket, but it might not be the very next one\" branch of\n> > MaintainOldSnapshotTimeMapping().\n> \n> I agree, that doesn't look right. It's correct, I think, for the \"if\n> (advance >= OLD_SNAPSHOT_TIME_MAP_ENTRIES)\" case, but not in the\n> \"else\" case. In the \"else\" case, it should advance by 1 (wrapping if\n> needed) each time we take the \"if (oldSnapshotControl->count_used ==\n> OLD_SNAPSHOT_TIME_MAP_ENTRIES)\" branch, and should remain unchanged in\n> the \"else\" branch for that if statement.\n\nYea.\n\n\n> > As far as I can tell, this code has been wrong since the feature has\n> > been committed. The tests don't show a problem, because none of this\n> > code is reached when old_snapshot_threshold = 0 (which has no real world\n> > use, it's purely for testing).\n> \n> I'm pretty sure I complained about the fact that only the\n> old_snapshot_threshold = 0 case was tested at the time this went in,\n> but I don't think Kevin was too convinced that we needed to do\n> anything else, and realistically, if he'd tried for a regression test\n> that ran for 15 minutes, Tom would've gone ballistic.\n\nI think it's not just Tom that'd have gone ballistic. I think it's the\nreason why, as I think is pretty clear, the feature was *never* actually\ntested. The results of what's being removed are not quite random, but\nit's not far from it. And there's long stretches of time where it never\nremoves things.\n\nIt's also a completely self-made problem.\n\nThere's really no reason at all to have bins of one minute. As it's a\nPGC_POSTMASTER GUC, it should just have divided time into bins of\n(old_snapshot_threshold * USEC_PER_SEC) / 100 or such. 
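As a rough illustration of the fixed-count binning suggested just above — dividing the threshold into a constant number of bins rather than one bin per minute — consider this sketch. The names, the bin count of 100, and the one-second minimum width are all assumptions for illustration, not code from the tree.

```python
# Illustrative sketch only: fixed-count binning for old_snapshot_threshold,
# instead of one bin per minute.  TARGET_BINS, the minimum bin width, and
# all names are assumptions, not actual PostgreSQL code.

USECS_PER_SEC = 1_000_000
TARGET_BINS = 100

def bin_width_usecs(threshold_secs):
    """Width of one bin; clamped to one second so tiny thresholds still work."""
    return max(USECS_PER_SEC, (threshold_secs * USECS_PER_SEC) // TARGET_BINS)

def bin_index(ts_usecs, width):
    """Ring index a timestamp maps to, with a little padding at the tail."""
    return (ts_usecs // width) % (TARGET_BINS + 10)

# A one-week threshold would then need only ~110 in-memory entries,
# at the cost of bins roughly 1.7 hours wide.
ONE_WEEK_SECS = 7 * 24 * 3600
```

With such a scheme the map size is independent of the threshold, at the cost of much coarser granularity — the trade-off Robert pushes back on in his reply.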
For a threshold\nof a week there's no need to keep 10k bins, and the minimum threshold of\n1 minute obviously is problematic.\n\n\n> > I really don't know what to do here. The feature never worked and will\n> > silently cause wrong query results. Fixing it seems like a pretty large\n> > task - there's a lot more bugs. But ripping out a feature in stable\n> > branches is pretty bad too.\n> \n> I don't know what other bugs there are, but the two you mention above\n> look fixable.\n\nThey probably are fixable. But there's a lot more, I think:\n\nLooking at TransactionIdLimitedForOldSnapshots() I think the ts ==\nupdate_ts threshold actually needs to be ts >= update_ts, right now we\ndon't handle being newer than the newest bin correctly afaict (mitigated\nby autovacuum=on with naptime=1s doing a snapshot more often). It's hard\nto say, because there's no comments.\n\nThe whole lock nesting is very hazardous. Most (all?)\nTestForOldSnapshot() calls happen with locks on on buffers held, and can\nacquire lwlocks itself. In some older branches we do entire *catalog\nsearches* with the buffer lwlock held (for RelationHasUnloggedIndex()).\n\nGetSnapshotData() using snapshot->lsn = GetXLogInsertRecPtr(); as the\nbasis to detect conflicts seems dangerous to me. Isn't that ignoring\ninserts that are already in progress?\n\n\n> Even if we decide that the feature can't be salvaged, I would vote\n> against ripping it out in back branches. I would instead argue for\n> telling people not to use it and ripping it out in master.\n\nIt currently silently causes wrong query results. There's no\ninfrastructure to detect / protect against that (*).\n\nI'm sure we can fix individual instances of problems. But I don't know\nhow one is supposed to verify that the fixes actually work. There's\ncurrently no tests for the actual feature. 
And manual tests are painful\ndue to the multi-minute thresholds needed, and it's really hard to\nmanually verify that only the right rows are removed due to the feature,\nand that all necessary errors are thrown. Given e.g. the bugs in my\nemail upthread, there's periods of several minutes where we'd not see\nany row removed and then periods where the wrong ones would be removed,\nso the manual tests would have to be repeated numerous times to actually\nensure anything.\n\nIf somebody wants to step up to the plate and fix these, it'd perhaps be\nmore realistic to say that we'll keep the feature. But even if somebody\ndoes, I think it'd require a lot of development in the back branches. On\na feature whose purpose is to eat data that is still required.\n\nI think even if we decide that we do not want to rip the feature out, we\nshould seriously consider hard disabling it in the backbranches. At\nleast I don't see how the fixed code is tested enough to be entrusted\nwith users' data.\n\nDo we actually have any evidence of this feature ever being used? I\ndidn't find much evidence for that in the archives (except Thomas\nfinding a problem). Given that it currently will switch between not\npreventing bloat and causing wrong query results, without that being\nnoticed...\n\n(*) perhaps I just am not understanding the protection however. To me\nit's not at all clear what:\n\n\t\t/*\n\t\t * Failsafe protection against vacuuming work of active transaction.\n\t\t *\n\t\t * This is not an assertion because we avoid the spinlock for\n\t\t * performance, leaving open the possibility that xlimit could advance\n\t\t * and be more current; but it seems prudent to apply this limit. 
It\n\t\t * might make pruning a tiny bit less aggressive than it could be, but\n\t\t * protects against data loss bugs.\n\t\t */\n\t\tif (TransactionIdIsNormal(latest_xmin)\n\t\t\t&& TransactionIdPrecedes(latest_xmin, xlimit))\n\t\t\txlimit = latest_xmin;\n\n\t\tif (NormalTransactionIdFollows(xlimit, recentXmin))\n\t\t\treturn xlimit;\n\nactually provides in the way of a protection.\n\n\n> However,\n> much as I'm not in love with all of the complexity this feature adds,\n> I don't see the problems you've reported here as serious enough to\n> justify ripping it out.\n> \n> What exactly is the interaction of this patch with your snapshot\n> scalability work?\n\nPost my work there's no precise RecentOldestXmin anymore (since\naccessing the frequently changing xmin of other backends is what causes\na good chunk of the scalability issues). But heap_page_prune_opt() has\nto determine what to use as the threshold for being able to prune dead\nrows. Without snapshot_too_old we can initially rely on the known\nboundaries to determine whether we can prune, and only determine an\n\"accurate\" boundary when encountering a prune xid (or a tuple, but\nthat's an optimization) that falls in the range where we don't know for\ncertain we can prune. But that's not easy to do with the way the\nold_snapshot_threshold stuff currently works.\n\nIt's not too hard to implement a crude version that just determines an\naccurate xmin horizon whenever pruning with old_snapshot_threshold\nset. But that seems like gimping performance for old_snapshot_threshold,\nwhich didn't seem nice.\n\nAdditionally, the current implementation of snapshot_too_old is pretty\nterrible about causing unnecessary conflicts when hot pruning. 
Even if\nthere was no need at all for the horizon to be limited to be able to\nprune the page, or if there was nothing to prune on the page (note that\nthe limiting happens before checking if the space on the page even makes\npruning useful), we still cause a conflict for future accesses, because\nTransactionIdLimitedForOldSnapshots() will\nSetOldSnapshotThresholdTimestamp() to a recent timestamp.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 1 Apr 2020 08:09:14 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "On Wed, Apr 1, 2020 at 2:40 AM Andres Freund <andres@anarazel.de> wrote:\n> I added some debug output to print the mapping before/after changes by\n> MaintainOldSnapshotTimeMapping() (note that I used timestamps relative\n> to the server start in minutes/seconds to make it easier to interpret).\n>\n> And the output turns out to be something like:\n>\n> WARNING: old snapshot mapping at \"before update\" with head ts: 7, current entries: 8 max entries: 15, offset: 0\n> entry 0 (ring 0): min 7: xid 582921233\n> entry 1 (ring 1): min 8: xid 654154155\n> entry 2 (ring 2): min 9: xid 661972949\n> entry 3 (ring 3): min 10: xid 666899382\n> entry 4 (ring 4): min 11: xid 644169619\n> entry 5 (ring 5): min 12: xid 644169619\n> entry 6 (ring 6): min 13: xid 644169619\n> entry 7 (ring 7): min 14: xid 644169619\n>\n> WARNING: head 420 s: updating existing bucket 4 for sec 660 with xmin 666899382\n>\n> WARNING: old snapshot mapping at \"after update\" with head ts: 7, current entries: 8 max entries: 15, offset: 0\n> entry 0 (ring 0): min 7: xid 582921233\n> entry 1 (ring 1): min 8: xid 654154155\n> entry 2 (ring 2): min 9: xid 661972949\n> entry 3 (ring 3): min 10: xid 666899382\n> entry 4 (ring 4): min 11: xid 666899382\n> entry 5 (ring 5): min 12: xid 644169619\n> entry 6 (ring 6): min 13: xid 644169619\n> entry 7 (ring 7): 
min 14: xid 644169619\n>\n> It's pretty obvious that the xids don't make a ton of sense, I think:\n> They're not monotonically ordered. The same values exist multiple times,\n> despite xids being constantly used. Also, despite the ringbuffer\n> supposedly having 15 entries (that's snapshot_too_old = 5min + the 10 we\n> always add), and the workload having run for 14min, we only have 8\n> entries.\n\nThe function header comment for MaintainOldSnapshotTimeMapping could\nhardly be more vague, as it's little more than a restatement of the\nfunction name. However, it looks to me like the idea is that this\nfunction might get called multiple times for the same or similar\nvalues of whenTaken. I suppose that's the point of this code:\n\n else if (ts <= (oldSnapshotControl->head_timestamp +\n ((oldSnapshotControl->count_used - 1)\n * USECS_PER_MINUTE)))\n {\n /* existing mapping; advance xid if possible */\n int bucket = (oldSnapshotControl->head_offset\n + ((ts - oldSnapshotControl->head_timestamp)\n / USECS_PER_MINUTE))\n % OLD_SNAPSHOT_TIME_MAP_ENTRIES;\n\n if (TransactionIdPrecedes(oldSnapshotControl->xid_by_minute[bucket],\nxmin))\n oldSnapshotControl->xid_by_minute[bucket] = xmin;\n }\n\nWhat I interpret this to be doing is saying - if we got a new call to\nthis function with a rounded-to-the-minute timestamp that we've seen\npreviously and for which we still have an entry, and if the XID passed\nto this function is newer than the one passed by the previous call,\nthen advance the xid_by_minute[] bucket to the newer value. Now that\nbegs the question - what does this XID actually represent? The\ncomments don't seem to answer that question, not even the comments for\nOldSnapshotControlData, which say that we should \"Keep one xid per\nminute for old snapshot error handling.\" but don't say which XIDs we\nshould keep or how they'll be used. However, the only call to\nMaintainOldSnapshotTimeMapping() is in GetSnapshotData(). 
It appears\nthat we call this function each time a new snapshot is taken and pass\nthe current time (modulo some fiddling) and snapshot xmin. Given that,\none would expect that any updates to the map would be tight races,\ni.e. a bunch of processes that all took their snapshots right around\nthe same time would all update the same map entry in quick succession,\nwith the newest value winning.\n\nAnd that makes the debugging output which I quoted from your message\nabove really confusing. At this point, the \"head timestamp\" is 7\nminutes after this facility started up. The first entry we have is\nfor minute 7, and the last is for minute 14. But the one we're\nupdating is for minute 11. How the heck can that happen? I might\nsuspect that you'd stopped a process inside GetSnapshotData() with a\ndebugger, but that can't explain it either, because GetSnapshotData()\ngets the xmin first and only afterwards gets the timestamp - so if\nyou'd stopped it for ~3 minutes just before the call to\nMaintainOldSnapshotTimeMapping(), it would've been updating the map\nwith an *old* XID. In reality, though, it changed the XID from\n644169619 to 666899382, advancing over 22 million XIDs. I don't\nunderstand what's going on there. How is this function getting called\nwith a 4-minute old value of whenTaken?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 1 Apr 2020 11:15:14 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." 
}, { "msg_contents": "Hi,\n\nOn 2020-04-01 11:15:14 -0400, Robert Haas wrote:\n> On Wed, Apr 1, 2020 at 2:40 AM Andres Freund <andres@anarazel.de> wrote:\n> > I added some debug output to print the mapping before/after changes by\n> > MaintainOldSnapshotTimeMapping() (note that I used timestamps relative\n> > to the server start in minutes/seconds to make it easier to interpret).\n> >\n> > And the output turns out to be something like:\n> >\n> > WARNING: old snapshot mapping at \"before update\" with head ts: 7, current entries: 8 max entries: 15, offset: 0\n> > entry 0 (ring 0): min 7: xid 582921233\n> > entry 1 (ring 1): min 8: xid 654154155\n> > entry 2 (ring 2): min 9: xid 661972949\n> > entry 3 (ring 3): min 10: xid 666899382\n> > entry 4 (ring 4): min 11: xid 644169619\n> > entry 5 (ring 5): min 12: xid 644169619\n> > entry 6 (ring 6): min 13: xid 644169619\n> > entry 7 (ring 7): min 14: xid 644169619\n> >\n> > WARNING: head 420 s: updating existing bucket 4 for sec 660 with xmin 666899382\n> >\n> > WARNING: old snapshot mapping at \"after update\" with head ts: 7, current entries: 8 max entries: 15, offset: 0\n> > entry 0 (ring 0): min 7: xid 582921233\n> > entry 1 (ring 1): min 8: xid 654154155\n> > entry 2 (ring 2): min 9: xid 661972949\n> > entry 3 (ring 3): min 10: xid 666899382\n> > entry 4 (ring 4): min 11: xid 666899382\n> > entry 5 (ring 5): min 12: xid 644169619\n> > entry 6 (ring 6): min 13: xid 644169619\n> > entry 7 (ring 7): min 14: xid 644169619\n> >\n> > It's pretty obvious that the xids don't make a ton of sense, I think:\n> > They're not monotonically ordered. The same values exist multiple times,\n> > despite xids being constantly used. 
Also, despite the ringbuffer\n> > supposedly having 15 entries (that's snapshot_too_old = 5min + the 10 we\n> > always add), and the workload having run for 14min, we only have 8\n> > entries.\n> \n> The function header comment for MaintainOldSnapshotTimeMapping could\n> hardly be more vague, as it's little more than a restatement of the\n> function name. However, it looks to me like the idea is that this\n> function might get called multiple times for the same or similar\n> values of whenTaken. I suppose that's the point of this code:\n\nRight. We enforce whenTaken to be monotonic\n(cf. GetSnapshotCurrentTimestamp()), but since\nGetSnapshotCurrentTimestamp() reduces the granularity of the timestamp\nto one-minute (the AlignTimestampToMinuteBoundary() call), it's\nobviously possible to end up in the same bin as a previous\n\n\n> What I interpret this to be doing is saying - if we got a new call to\n> this function with a rounded-to-the-minute timestamp that we've seen\n> previously and for which we still have an entry, and if the XID passed\n> to this function is newer than the one passed by the previous call,\n> then advance the xid_by_minute[] bucket to the newer value. Now that\n> begs the question - what does this XID actually represent? The\n> comments don't seem to answer that question, not even the comments for\n> OldSnapshotControlData, which say that we should \"Keep one xid per\n> minute for old snapshot error handling.\" but don't say which XIDs we\n> should keep or how they'll be used. However, the only call to\n> MaintainOldSnapshotTimeMapping() is in GetSnapshotData(). It appears\n> that we call this function each time a new snapshot is taken and pass\n> the current time (modulo some fiddling) and snapshot xmin. Given that,\n> one would expect that any updates to the map would be tight races,\n> i.e. 
a bunch of processes that all took their snapshots right around\n> the same time would all update the same map entry in quick succession,\n> with the newest value winning.\n\nRight.\n\n\n> And that make the debugging output which I quoted from your message\n> above really confusing. At this point, the \"head timestamp\" is 7\n> minutes after this facility started up. The first we entry we have is\n> for minute 7, and the last is for minute 14. But the one we're\n> updating is for minute 11. How the heck can that happen?\n\nIf I understand what you're referencing correctly, I think that is because,\ndue to the bug, the \"need a new bucket\" branch doesn't just extend by\none bucket, it extends it by many in common cases. Basically filling\nbuckets \"into the future\".\n\nThe advance = ... variable in the branch will not always be 1, even when\nwe continually call Maintain*. Here's some debug output showing that\n(slightly modified from the patch I previously sent):\n\nWARNING: old snapshot mapping at \"before update\" with head ts: 1, current entries: 2 max entries: 15, offset: 0\n entry 0 (ring 0): min 1: xid 1089371384\n entry 1 (ring 1): min 2: xid 1099553206\n\nWARNING: head 1 min: filling 2 buckets starting at 0 for whenTaken 3 min, with xmin 1109840204\n\nWARNING: old snapshot mapping at \"after update\" with head ts: 3, current entries: 4 max entries: 15, offset: 0\n entry 0 (ring 0): min 3: xid 1089371384\n entry 1 (ring 1): min 4: xid 1099553206\n entry 2 (ring 2): min 5: xid 1109840204\n entry 3 (ring 3): min 6: xid 1109840204\n\nNote how the two new buckets have the same xid, and how we're inserting\nfor \"whenTaken 3 min\", but we've filled the mapping up to minute 6.\n\n\nI don't think the calculation of the 'advance' variable is correct as\nis, even if we ignore the wrong setting of the head_timestamp variable.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 1 Apr 2020 08:39:47 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", 
"msg_from_op": true, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "On Wed, Apr 1, 2020 at 11:09 AM Andres Freund <andres@anarazel.de> wrote:\n> That doesn't exist in all the back branches. Think it'd be easier to add\n> code to explicitly prune it during MaintainOldSnapshotTimeMapping().\n\nThat's reasonable.\n\n> There's really no reason at all to have bins of one minute. As it's a\n> PGC_POSTMASTER GUC, it should just have didided time into bins of\n> (old_snapshot_threshold * USEC_PER_SEC) / 100 or such. For a threshold\n> of a week there's no need to keep 10k bins, and the minimum threshold of\n> 1 minute obviously is problematic.\n\nI am very doubtful that this approach would have been adequate. It\nwould mean that, with old_snapshot_threshold set to a week, the\nthreshold for declaring a snapshot \"old\" would jump forward 16.8 hours\nat a time. It's hard for me to make a coherent argument right now as\nto exactly what problems that would create, but it's not very\ngranular, and a lot of bloat-related things really benefit from more\ngranularity. I also don't really see what the problem with keeping a\nbucket per minute in memory is, even for a week. It's only 60 * 24 * 7\n= ~10k buckets, isn't it? That's not really insane for an in-memory\ndata structure. I agree that the code that does that maintenance being\nbuggy is a problem to whatever extent that is the case, but writing\nthe code to have fewer buckets wouldn't necessarily have made it any\nless buggy.\n\n> They probably are fixable. But there's a lot more, I think:\n>\n> Looking at TransactionIdLimitedForOldSnapshots() I think the ts ==\n> update_ts threshold actually needs to be ts >= update_ts, right now we\n> don't handle being newer than the newest bin correctly afaict (mitigated\n> by autovacuum=on with naptime=1s doing a snapshot more often). 
It's hard\n> to say, because there's no comments.\n\nThat test and the following one for \"if (ts == update_ts)\" both make\nme nervous too. If only two of <, >, and = are expected, there should\nbe an Assert() to that effect, at least. If all three values are\nexpected then we need an explanation of why we're only checking for\nequality.\n\n> The whole lock nesting is very hazardous. Most (all?)\n> TestForOldSnapshot() calls happen with locks on on buffers held, and can\n> acquire lwlocks itself. In some older branches we do entire *catalog\n> searches* with the buffer lwlock held (for RelationHasUnloggedIndex()).\n\nThe catalog searches are clearly super-bad, but I'm not sure that the\nother ones have a deadlock risk or anything. They might, but I think\nwe'd need some evidence of that.\n\n> GetSnapshotData() using snapshot->lsn = GetXLogInsertRecPtr(); as the\n> basis to detect conflicts seems dangerous to me. Isn't that ignoring\n> inserts that are already in progress?\n\nHow so?\n\n> It currently silently causes wrong query results. There's no\n> infrastructure to detect / protect against that (*).\n\nSure, and what if you break more stuff ripping it out? Ripping this\nvolume of code out in a supposedly-stable branch is totally insane\nalmost no matter how broken the feature is. I also think, and we've\nhad this disagreement before, that you're far too willing to say\n\"well, that's wrong so we need to hit it with a nuke.\" I complained\nwhen you added those error checks to vacuum in back-branches, and\nsince that release went out people are regularly tripping those checks\nand taking prolonged outages for a problem that wasn't making them\nunhappy before. I know that in theory those people are better off\nbecause their database was always corrupted and now they know. But for\nsome of them, those prolonged outages are worse than the problem they\nhad before. 
I believe it was irresponsible to decide on behalf of our\nentire user base that they were better off with such a behavior change\nin a supposedly-stable branch, and I believe the same thing here.\n\nI have no objection to the idea that *if* the feature is hopelessly\nbroken, it should be removed. But I don't have confidence at this\npoint that you've established that, and I think ripping out thousands\nof lines of codes in the back-branches is terrible. Even\nhard-disabling the feature in the back-branches without actually\nremoving the code is an awfully strong reaction, but it could be\njustified if we find out that things are actually super-bad and not\nreally fixable. Actually removing the code is unnecessary, protects\nnobody, and has risk.\n\n> Do we actually have any evidence of this feature ever beeing used? I\n> didn't find much evidence for that in the archives (except Thomas\n> finding a problem). Given that it currently will switch between not\n> preventing bloat and causing wrong query results, without that being\n> noticed...\n\nI believe that at least one EnterpriseDB customer used it, and\npossibly more than one. I am not sure how extensively, though.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 1 Apr 2020 12:02:18 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "On Wed, Apr 1, 2020 at 9:02 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I complained\n> when you added those error checks to vacuum in back-branches, and\n> since that release went out people are regularly tripping those checks\n> and taking prolonged outages for a problem that wasn't making them\n> unhappy before. I know that in theory those people are better off\n> because their database was always corrupted and now they know. 
But for\n> some of them, those prolonged outages are worse than the problem they\n> had before. I believe it was irresponsible to decide on behalf of our\n> entire user base that they were better off with such a behavior change\n> in a supposedly-stable branch, and I believe the same thing here.\n\nI agreed with that decision, FWIW. Though I don't deny that there is\nsome merit in what you say. This is the kind of high level\nphilosophical question where large differences of opinion are quite\nnormal.\n\nI don't think that it's fair to characterize Andres' actions in that\nsituation as in any way irresponsible. We had an extremely complicated\ndata corruption bug that he went to great lengths to fix, following\ntwo other incorrect fixes. He was jet lagged from travelling to India\nat the time. He went to huge lengths to make sure that the bug was\ncorrectly squashed.\n\n> Actually removing the code is unnecessary, protects\n> nobody, and has risk.\n\nEvery possible approach has risk. We are deciding among several\nunpleasant and risky alternatives here, no?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 1 Apr 2020 10:03:37 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "On Wed, Apr 1, 2020 at 1:03 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I don't think that it's fair to characterize Andres' actions in that\n> situation as in any way irresponsible. We had an extremely complicated\n> data corruption bug that he went to great lengths to fix, following\n> two other incorrect fixes. He was jet lagged from travelling to India\n> at the time. He went to huge lengths to make sure that the bug was\n> correctly squashed.\n\nI don't mean it as a personal attack on Andres, and I know and am glad\nthat he worked hard on the problem, but I don't agree that it was the\nright decision. 
Perhaps \"irresponsible\" is the wrong word, but it's\ncertainly caused problems for multiple EnterpriseDB customers, and in\nmy view, those problems weren't necessary. Either a WARNING or an\nERROR would have shown up in the log, but an ERROR terminates VACUUM\nfor that table and thus basically causes autovacuum to be completely\nbroken. That is a really big problem. Perhaps you will want to argue,\nas Andres did, that the value of having ERROR rather than WARNING in\nthe log justifies that outcome, but I sure don't agree.\n\n> > Actually removing the code is unnecessary, protects\n> > nobody, and has risk.\n>\n> Every possible approach has risk. We are deciding among several\n> unpleasant and risky alternatives here, no?\n\nSure, but not all levels of risk are equal. Jumping out of a plane\ncarries some risk of death whether or not you have a parachute, but\nthat does not mean that we shouldn't worry about whether you have one\nor not before you jump.\n\nIn this case, I think it is pretty clear that hard-disabling the\nfeature by always setting old_snapshot_threshold to -1 carries less\nrisk of breaking unrelated things than removing code that caters to\nthe feature all over the code base. Perhaps it is not quite as\ndramatic as my parachute example, but I think it is pretty clear all\nthe same that one is a lot more likely to introduce new bugs than the\nother. A carefully targeted modification of a few lines of code in 1\nfile just about has to carry less risk than ~1k lines of code spread\nacross 40 or so files.\n\nHowever, I still think that without some more analysis, it's not clear\nwhether we should go this direction at all. Andres's results suggest\nthat there are some bugs here, but I think we need more senior hackers\nto study the situation before we make a decision about what to do\nabout them. I certainly haven't had enough time to even fully\nunderstand the problems yet, and nobody else has posted on that topic\nat all. 
I have the highest respect for Andres and his technical\nability, and if he says this stuff has problems, I'm sure it does. Yet\nI'm not willing to conclude that because he's tired and frustrated\nwith this stuff right now, it's unsalvageable. For the benefit of the\nwhole community, such a claim deserves scrutiny from multiple people.\n\nIs there any chance that you're planning to look into the details?\nThat would certainly be welcome from my perspective.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 1 Apr 2020 13:27:56 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "Hi,\n\nOn 2020-04-01 12:02:18 -0400, Robert Haas wrote:\n> On Wed, Apr 1, 2020 at 11:09 AM Andres Freund <andres@anarazel.de> wrote:\n> > There's really no reason at all to have bins of one minute. As it's a\n> > PGC_POSTMASTER GUC, it should just have didided time into bins of\n> > (old_snapshot_threshold * USEC_PER_SEC) / 100 or such. For a threshold\n> > of a week there's no need to keep 10k bins, and the minimum threshold of\n> > 1 minute obviously is problematic.\n>\n> I am very doubtful that this approach would have been adequate. It\n> would mean that, with old_snapshot_threshold set to a week, the\n> threshold for declaring a snapshot \"old\" would jump forward 16.8 hours\n> at a time. It's hard for me to make a coherent argument right now as\n> to exactly what problems that would create, but it's not very\n> granular, and a lot of bloat-related things really benefit from more\n> granularity. I also don't really see what the problem with keeping a\n> bucket per minute in memory is, even for a week. It's only 60 * 24 * 7\n> = ~10k buckets, isn't it? That's not really insane for an in-memory\n> data structure. 
I agree that the code that does that maintenance being\n> buggy is a problem to whatever extent that is the case, but writing\n> the code to have fewer buckets wouldn't necessarily have made it any\n> less buggy.\n\nMy issue isn't really that it's too many buckets right now, but that it\ndoesn't scale down to smaller thresholds. I think to be able to develop\nthis reasonably, it'd need to be able to support thresholds in the\nsub-second range. And I don't see how you can have the same binning for\nsuch small thresholds, and for multi-day thresholds - we'd quickly go to\nmillions of buckets for longer thresholds.\n\nI really think we'd need to support millisecond resolution to make this\nproperly testable.\n\n\n> > GetSnapshotData() using snapshot->lsn = GetXLogInsertRecPtr(); as the\n> > basis to detect conflicts seems dangerous to me. Isn't that ignoring\n> > inserts that are already in progress?\n\n> How so?\n\nBecause it returns the end of the reserved WAL, not how far we've\nactually inserted. I.e. there can be in-progress, but not finished,\nmodifications that will have an LSN < GetXLogInsertRecPtr(). But the\nwhenTaken timestamp could reflect one that should throw an error for\nthese in-progress modifications (since the transaction limiting happens\nbefore the WAL logging).\n\nI am not 100%, but I suspect that that could lead to errors not being\nthrown that should, because TestForOldSnapshot() will not see these\nin-progress modifications as conflicting.\n\nHm, also, shouldn't\n\t\t&& PageGetLSN(page) > (snapshot)->lsn)\nin TestForOldSnapshot() be an >=?\n\n\n> > It currently silently causes wrong query results. There's no\n> > infrastructure to detect / protect against that (*).\n>\n> Sure, and what if you break more stuff ripping it out? 
Ripping this\n> volume of code out in a supposedly-stable branch is totally insane\n> almost no matter how broken the feature is.\n\nFor the backbranches I was just thinking of forcing the GUC to be off\n(either by disallowing it to be set to on, or just warning when it's set\nto true, but not propagating the value).\n\n\n> I have no objection to the idea that *if* the feature is hopelessly\n> broken, it should be removed.\n\nI would be a lot less inclined to go that way if old_snapshot_threshold\n\na) weren't explicitly about removing still-needed data - in contrast to\n a lot of other features, where the effects of bugs are temporary, here\n it can be much larger.\nb) were a previously working feature, but as far as I can tell, it never really worked\nc) had tests that verify that my fixes actually do the right thing. As\n it stands, I'd not just have to fix the bugs, I'd also have to develop\n a test framework that can test this\n\nWhile I wish I had been more forceful, and reviewed more of the code to\npoint out more of the quality issues, I did argue hard against the\nfeature going in. On account of it being architecturally bad and\nimpactful. Which I think it has proven to be several times over by\nnow. And now I'm kind of on the hook to fix it, it seems?\n\n\n> I also think, and we've had this disagreement before, that you're far\n> too willing to say \"well, that's wrong so we need to hit it with a\n> nuke.\" I complained when you added those error checks to vacuum in\n> back-branches, and since that release went out people are regularly\n> tripping those checks and taking prolonged outages for a problem that\n> wasn't making them unhappy before. I know that in theory those people\n> are better off because their database was always corrupted and now\n> they know. But for some of them, those prolonged outages are worse\n> than the problem they had before.\n\nI think this is somewhat revisionist. 
Sure, the errors were added\nafter like the 10th data corruption bug around freezing that we didn't\nfind for a long time, because of the lack of errors being thrown. But\nthe error checks weren't primarily added to find further bugs, but to\nprevent data loss due to the fixed bug. Of which we had field reports.\n\nI'd asked over *weeks* for reviews of the bug fixes. Not a single person\nexpressed concerns about throwing new errors at that time. First version\nof the patches with the errors:\nhttps://postgr.es/m/20171114030341.movhteyakqeqx5pm%40alap3.anarazel.de\nI pushed them over a month later\nhttps://postgr.es/m/20171215023059.oeyindn57oeis5um%40alap3.anarazel.de\n\nThere also wasn't (and isn't) a way to just report back that we can't\ncurrently freeze the individual page, without doing major surgery. And\neven if there were, what are we supposed to do other than throw an error?\nWe need to remove tuples below relfrozenxid, or we corrupt the table.\n\nAs I've first asked before when you complained about those errors: What\nwas the alternative? Just have invisible tuples reappear? Delete them? I\ndon't think you've ever answered that.\n\nYou brought this up as an example for me being over-eager with error\nchecks before. But I don't see how that meshes with the history visible\nin the thread referenced above.\n\n\nThe more general issue, about throwing errors, is not just about the\npeople that don't give a hoot about whether their data evolves on its\nown (perhaps a good tradeoff for them). Not throwing errors affects\n*everyone*. Some people do care about their data. Without errors we\nnever figure out that we screwed up. And long-term, even the people\nthat care much more about availability than data loss, benefit from the\nwhole system getting more robust.\n\nWe've since found numerous further data-corrupting bugs because of the\nrelfrozenxid checks. Some of very long-standing vintage. 
Some in newly\nadded code.\n\nYes, hypothetically, I'd argue for introducing the checks solely for the\nsake of finding bugs. Even if I were prescient enough to foresee the number of\nissues caused (although I'd add block numbers to the error message from\nthe get go, knowing that). But I'd definitely not do so in the back\nbranches.\n\n\n> I believe it was irresponsible to decide on behalf of our entire user\n> base that they were better off with such a behavior change in a\n> supposedly-stable branch, and I believe the same thing here.\n\nAs I explained above, I don't think that's fair with regard to the\nrelfrozenxid errors. Setting that aside:\n\nIn these discussions you always seem to only argue for the people that\ndon't care about their data. But, uh, a lot of people do - should we\njust silently eat their data? And the long-term quality of the project\ngets a lot better by throwing errors, because it actually allows us to\nfix them.\n\nAs far as I can tell we couldn't even have added the checks to master,\nback then, if we follow your logic: A lot of the reports about hitting\nthe errors were with 11+ (partially due to pg_upgrade, partially because\nthey detected other bugs).\n\n\nThe likelihood of hurting people by adding checks at a later point would\nbe a lot lower, if we stopped adding code that ignores errors silently\nand hopes for the best. But we keep adding such \"lenient\" code.\n\n\nWe just found another long-standing cause of data corruption, that\nshould have been found earlier if we had errors, or at least warnings,\nbtw. 
The locking around vac_update_datfrozenxid() has been broken for a\nlong long time, but the silent 'if (bogus) return' made it very hard to\nfind.\nhttps://www.postgresql.org/message-id/20200323235036.6pje6usrjjx22zv3%40alap3.anarazel.de\n\nAlso, I've recently seen a number of databases being eaten because we\njust ignore our own WAL logging rules to avoid throwing hard enough\nerrors (RelationTruncate() WAL logging the truncation outside of a\ncritical section - oops if you hit it, your primary and replicas/backups\ndiverge, among many other bad consequences).\n\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 1 Apr 2020 11:01:07 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "On Wed, Apr 1, 2020 at 10:28 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Sure, but not all levels of risk are equal. Jumping out of a plane\n> carries some risk of death whether or not you have a parachute, but\n> that does not mean that we shouldn't worry about whether you have one\n> or not before you jump.\n>\n> In this case, I think it is pretty clear that hard-disabling the\n> feature by always setting old_snapshot_threshold to -1 carries less\n> risk of breaking unrelated things than removing code that caters to\n> the feature all over the code base. Perhaps it is not quite as\n> dramatic as my parachute example, but I think it is pretty clear all\n> the same that one is a lot more likely to introduce new bugs than the\n> other. A carefully targeted modification of a few lines of code in 1\n> file just about has to carry less risk than ~1k lines of code spread\n> across 40 or so files.\n\nYeah, that's certainly true. But is that fine point really what\nanybody disagrees about? 
I didn't think that Andres was focussed on\nliterally ripping it out over just disabling it.\n\n> Is there any chance that you're planning to look into the details?\n> That would certainly be welcome from my perspective.\n\nI had a few other things that I was going to work on this week, but\nthose seem less urgent. I'll take a look into it, and report back\nwhat I find.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 1 Apr 2020 11:04:43 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "Hi,\n\nOn 2020-04-01 11:04:43 -0700, Peter Geoghegan wrote:\n> On Wed, Apr 1, 2020 at 10:28 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > Is there any chance that you're planning to look into the details?\n> > That would certainly be welcome from my perspective.\n\n+1\n\nThis definitely needs more eyes. I am not even close to understanding\nthe code fully.\n\n\n> I had a few other things that I was going to work on this week, but\n> those seems less urgent. I'll take a look into it, and report back\n> what I find.\n\nThank you!\n\nI attached a slightly evolved version of my debugging patch.\n\n\nGreetings,\n\nAndres Freund", "msg_date": "Wed, 1 Apr 2020 11:18:55 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "Hi,\n\nOn 2020-04-01 13:27:56 -0400, Robert Haas wrote:\n> Perhaps \"irresponsible\" is the wrong word, but it's certainly caused\n> problems for multiple EnterpriseDB customers, and in my view, those\n> problems weren't necessary. Either a WARNING or an ERROR would have\n> shown up in the log, but an ERROR terminates VACUUM for that table and\n> thus basically causes autovacuum to be completely broken. That is a\n> really big problem. 
Perhaps you will want to argue, as Andres did,\n> that the value of having ERROR rather than WARNING in the log\n> justifies that outcome, but I sure don't agree.\n\nIf that had been a really viable option, I would have done so. At the\nvery least in the back branches, but quite possibly also in master. Or\nif somebody had brought them up as an issue at the time.\n\nWhat is heap_prepare_freeze_tuple/FreezeMultiXactId supposed to do after\nissuing a WARNING in these cases? Without the ERROR, e.g.,\n\t\t\tif (!TransactionIdDidCommit(xid))\n\t\t\t\tereport(ERROR,\n\t\t\t\t\t\t(errcode(ERRCODE_DATA_CORRUPTED),\n\t\t\t\t\t\t errmsg_internal(\"uncommitted xmin %u from before xid cutoff %u needs to be frozen\",\n\t\t\t\t\t\t\t\t\t\t xid, cutoff_xid)));\nwould make a deleted tuple visible.\n\n\n\t\tif (TransactionIdPrecedes(xid, relfrozenxid))\n\t\t\tereport(ERROR,\n\t\t\t\t\t(errcode(ERRCODE_DATA_CORRUPTED),\n\t\t\t\t\t errmsg_internal(\"found xmin %u from before relfrozenxid %u\",\n\t\t\t\t\t\t\t\t\t xid, relfrozenxid)));\nwould go on to replace xmin of a potentially uncommitted tuple with\nrelfrozenxid, making it appear visible.\n\n\n\t\tif (TransactionIdPrecedes(xid, relfrozenxid))\n\t\t\tereport(ERROR,\n\t\t\t\t\t(errcode(ERRCODE_DATA_CORRUPTED),\n\t\t\t\t\t errmsg_internal(\"found xmax %u from before relfrozenxid %u\",\n\t\t\t\t\t\t\t\t\t xid, relfrozenxid)));\nwould replace the xmax indicating a potentially deleted tuple with ?, either\nmaking the tuple become, potentially wrongly, visible/invisible\n\nor\n\telse if (MultiXactIdPrecedes(multi, relminmxid))\n\t\tereport(ERROR,\n\t\t\t\t(errcode(ERRCODE_DATA_CORRUPTED),\n\t\t\t\t errmsg_internal(\"found multixact %u from before relminmxid %u\",\n\t\t\t\t\t\t\t\t multi, relminmxid)));\nor ...\n\n\nJust continuing is easier said than done. 
Especially with the background\nof knowing that several users had hit the bug that allowed all of the\nabove to be hit, and that advancing relfrozenxid further would make it\nworse.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 1 Apr 2020 11:37:06 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "On Wed, Apr 1, 2020 at 2:37 PM Andres Freund <andres@anarazel.de> wrote:\n> Just continuing is easier said than done. Especially with the background\n> of knowing that several users had hit the bug that allowed all of the\n> above to be hit, and that advancing relfrozenxid further would make it\n> worse.\n\nFair point, but it seems we're arguing over nothing here, or at least\nnothing relevant to this thread, because it sounds like if we are\ngoing to disable that you're OK with doing that by just shutting it\noff in the code rather than trying to remove it all. I had the opposite\nimpression from your first email.\n\nSorry to have derailed the thread, and for my poor choice of words.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 1 Apr 2020 15:02:28 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "On Wed, Apr 1, 2020 at 10:09 AM Andres Freund <andres@anarazel.de> wrote:\n\nFirst off, many thanks to Andres for investigating this, and apologies for\nthe bugs. Also thanks to Michael for making sure I saw the thread. I must\nalso apologize for not being able to track the community lists\nconsistently due to health issues that are exacerbated by stress, and the\nfact that these lists often push past my current limits. 
I'll try to help\nin this as best I can.\n\nDo we actually have any evidence of this feature ever beeing used? I\n> didn't find much evidence for that in the archives (except Thomas\n> finding a problem).\n\n\nThis was added because a very large company trying to convert from Oracle\nhad a test that started to show some slowdown on PostgreSQL after 8 hours,\nserious slowdown by 24 hours, and crashed hard before it could get to 48\nhours -- due to lingering WITH HOLD cursors left by ODBC code. They had\nmillions of lines of code that would need to be rewritten without this\nfeature. With this feature (set to 20 minutes, if I recall correctly),\ntheir unmodified code ran successfully for at least three months solid\nwithout failure or corruption. Last I heard, they were converting a large\nnumber of instances from Oracle to PostgreSQL, and those would all fail\nhard within days of running with this feature removed or disabled.\n\nAlso, VMware is using PostgreSQL as an embedded part of many products, and\nthis feature was enabled to deal with similar failures due to ODBC cursors;\nso the number of instances running 24/7 under high load which have shown a\nclear benefit from enabling this feature has a lot of zeros.\n\nPerhaps the lack of evidence for usage in the archives indicates a low\nfrequency of real-world failures due to the feature, rather than lack of\nuse? 
I'm not doubting that Andres found real issues that should be fixed,\nbut perhaps not very many people who are using the feature have more than\ntwo billion transactions within the time threshold, and perhaps the other\nproblems are not as big as the problems solved by use of the feature -- at\nleast in some cases.\n\nTo save readers who have not yet done the math some effort, at the 20\nminute threshold used by the initial user, they would need to have a\nsustained rate of consumption of transaction IDs of over 66 million per\nsecond to experience wraparound problems, and at the longest threshold I\nhave seen it would need to exceed an average of 461,893 TPS for three days\nsolid to hit wraparound. Those aren't impossible rates to hit, but in\npractice it might not be a frequent occurrence yet on modern hardware with\nsome real-world applications. Hopefully we can find a way to fix this\nbefore those rates become common.\n\nI am reviewing the issue and patches now, and hope I can make some useful\ncontribution to the discussion.\n\n-- \nKevin Grittner\nVMware vCenter Server\nhttps://www.vmware.com/\n\nOn Wed, Apr 1, 2020 at 10:09 AM Andres Freund <andres@anarazel.de> wrote:First off, many thanks to Andres for investigating this, and apologies for the bugs.  Also thanks to Michael for making sure I saw the thread.  I must also apologize that for not being able to track the community lists consistently due to health issues that are exacerbated by stress, and the fact that these lists often push past my current limits.  I'll try to help in this as best I can.Do we actually have any evidence of this feature ever beeing used? 
I\ndidn't find much evidence for that in the archives (except Thomas\nfinding a problem).This was added because a very large company trying to convert from Oracle had a test that started to show some slowdown on PostgreSQL after 8 hours, serious slowdown by 24 hours, and crashed hard before it could get to 48 hours -- due to lingering WITH HOLD cursors left by ODBC code.  They had millions of lines of code that would need to be rewritten without this feature.  With this feature (set to 20 minutes, if I recall correctly), their unmodified code ran successfully for at least three months solid without failure or corruption.  Last I heard, they were converting a large number of instances from Oracle to PostgreSQL, and those would all fail hard within days of running with this feature removed or disabled.Also, VMware is using PostgreSQL as an embedded part of many products, and this feature was enabled to deal with similar failures due to ODBC cursors; so the number of instances running 24/7 under high load which have shown a clear benefit from enabling this feature has a lot of zeros.Perhaps the lack of evidence for usage in the archives indicates a low frequency of real-world failures due to the feature, rather than lack of use?  I'm not doubting that Andres found real issues that should be fixed, but perhaps not very many people who are using the feature have more than two billion transactions within the time threshold, and perhaps the other problems are not as big as the problems solved by use of the feature -- at least in some cases.To save readers who have not yet done the math some effort, at the 20 minute threshold used by the initial user, they would need to have a sustained rate of consumption of transaction IDs of over 66 million per second to experience wraparound problems, and at the longest threshold I have seen it would need to exceed an average of 461,893 TPS for three days solid to hit wraparound.  
Those aren't impossible rates to hit, but in practice it might not be a frequent occurrence yet on modern hardware with some real-world applications.  Hopefully we can find a way to fix this before those rates become common.I am reviewing the issue and patches now, and hope I can make some useful contribution to the discussion.-- Kevin GrittnerVMware vCenter Serverhttps://www.vmware.com/", "msg_date": "Wed, 1 Apr 2020 14:10:09 -0500", "msg_from": "Kevin Grittner <kgrittn@gmail.com>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "Hi,\n\nNice to have you back for a bit! Even if the circumstances aren't\ngreat...\n\nIt's very understandable that the lists are past your limits, I barely\nkeep up these days. Without any health issues.\n\n\nOn 2020-04-01 14:10:09 -0500, Kevin Grittner wrote:\n> Perhaps the lack of evidence for usage in the archives indicates a low\n> frequency of real-world failures due to the feature, rather than lack of\n> use? I'm not doubting that Andres found real issues that should be fixed,\n> but perhaps not very many people who are using the feature have more than\n> two billion transactions within the time threshold, and perhaps the other\n> problems are not as big as the problems solved by use of the feature -- at\n> least in some cases.\n\n> To save readers who have not yet done the math some effort, at the 20\n> minute threshold used by the initial user, they would need to have a\n> sustained rate of consumption of transaction IDs of over 66 million per\n> second to experience wraparound problems, and at the longest threshold I\n> have seen it would need to exceed an average of 461,893 TPS for three days\n> solid to hit wraparound. Those aren't impossible rates to hit, but in\n> practice it might not be a frequent occurrence yet on modern hardware with\n> some real-world applications. 
Hopefully we can find a way to fix this\n> before those rates become common.\n\nThe wraparound issue on their own wouldn't be that bad - when I found it\nI did play around with a few ideas for how to fix it. The most practical\nwould probably be to have MaintainOldSnapshotTimeMapping() scan all\nbuckets when a new oldSnapshotControl->oldest_xid is older than\nRecentGlobalXmin. There's no benefit in the contents of those buckets\nanyway, since we know that we can freeze those independent of\nold_snapshot_threshold.\n\nThe thing that makes me really worried is that the contents of the time\nmapping seem very wrong. I've reproduced query results in a REPEATABLE\nREAD transaction changing (pruned without triggering an error). And I've\nreproduced rows not getting removed for much longer than than they\nshould, according to old_snapshot_threshold.\n\n\nI suspect one reason for users not noticing either is that\n\na) it's plausible that users of the feature would mostly have\n long-running queries/transactions querying immutable or insert only\n data. Those would not notice that, on other tables, rows are getting\n removed, where access would not trigger the required error.\n\nb) I observe long-ish phases were no cleanup is happening (due to\n oldSnapshotControl->head_timestamp getting updated more often than\n correct). But if old_snapshot_threshold is small enough in relation to\n the time the generated bloat becomes problematic, there will still be\n occasions to actually perform cleanup.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 1 Apr 2020 12:42:59 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "On Wed, Apr 1, 2020 at 2:43 PM Andres Freund <andres@anarazel.de> wrote:\n\n> The thing that makes me really worried is that the contents of the time\n> mapping seem very wrong. 
I've reproduced query results in a REPEATABLE\n> READ transaction changing (pruned without triggering an error).\n\n\nThat is a very big problem. On the sort-of bright side (ironic in light of\nthe fact that I'm a big proponent of using serializable transactions), none\nof the uses that I have personally seen of this feature use anything other\nthan the default READ COMMITTED isolation level.  That might help explain\nthe lack of complaints for those using the feature.  But yeah, I REALLY\nwant to see a solid fix for that!\n\n\n> And I've\n> reproduced rows not getting removed for much longer than they\n> should, according to old_snapshot_threshold.\n>\n> I suspect one reason for users not noticing either is that\n>\n> a) it's plausible that users of the feature would mostly have\n> long-running queries/transactions querying immutable or insert only\n> data. Those would not notice that, on other tables, rows are getting\n> removed, where access would not trigger the required error.\n>\n> b) I observe long-ish phases where no cleanup is happening (due to\n> oldSnapshotControl->head_timestamp getting updated more often than\n> correct). But if old_snapshot_threshold is small enough in relation to\n> the time the generated bloat becomes problematic, there will still be\n> occasions to actually perform cleanup.\n>\n\nKeep in mind that the real goal of this feature is not to eagerly _see_\n\"snapshot too old\" errors, but to prevent accidental debilitating bloat due\nto one misbehaving user connection.  This is particularly easy to see (and\ntherefore unnervingly common) for those using ODBC, which in my experience\ntends to correspond to the largest companies which are using PostgreSQL.\nIn some cases, the snapshot which is preventing removal of the rows will\nnever be used again; removal of the rows will not actually affect the\nresult of any query, but only the size and performance of the database.\nThis is a \"soft limit\" -- kinda like max_wal_size. 
Where there was a\ntrade-off between accuracy of the limit and performance, the less accurate\nway was intentionally chosen.  I apologize for not making that more clear\nin comments.\n\nWhile occasional \"snapshot too old\" errors are an inconvenient side effect\nof achieving the primary goal, it might be of interest to know that the\ninitial (very large corporate) user of this feature had, under Oracle,\nintentionally used a cursor that would be held open as long as a user chose\nto leave a list open for scrolling around.  They used cursor features for\nas long as the cursor allowed.  This could be left open for days or weeks\n(or longer?).  Their query ordered by a unique index, and tracked the ends\nof the currently displayed portion of the list so that if they happened to\nhit the \"snapshot too old\" error they could deallocate and restart the\ncursor and reposition before moving forward or back to the newly requested\nrows.  They were not willing to convert to PostgreSQL unless this approach\ncontinued to work.\n\nIn Summary:\n(1) It's not urgent that rows always be removed as soon as possible after\nthe threshold is crossed as long as they don't often linger too awfully far\npast that limit and allow debilitating bloat.\n(2) It _is_ a problem if results inconsistent with the snapshot are\nreturned -- a \"snapshot too old\" error is necessary.\n(3) Obviously, wraparound problems need to be solved.\n\nI hope this is helpful.\n\n-- \nKevin Grittner\nVMware vCenter Server\nhttps://www.vmware.com/
", "msg_date": "Wed, 1 Apr 2020 15:11:52 -0500", "msg_from": "Kevin Grittner <kgrittn@gmail.com>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "On Wed, Apr 1, 2020 at 3:43 PM Andres Freund <andres@anarazel.de> wrote:\n> The thing that makes me really worried is that the contents of the time\n> mapping seem very wrong. I've reproduced query results in a REPEATABLE\n> READ transaction changing (pruned without triggering an error). 
And I've\n> reproduced rows not getting removed for much longer than they\n> should, according to old_snapshot_threshold.\n\nI think it would be a good idea to add a system view that shows the\ncontents of the mapping. We could make it a contrib module, if you\nlike, so that it can even be installed on back branches. We'd need to\nmove the structure definition from snapmgr.c to a header file, but\nthat doesn't seem like such a big deal.\n\nMaybe that contrib module could even have some functions to simulate\naging without the passage of any real time. Like, say you have a\nfunction or procedure old_snapshot_pretend_time_has_passed(integer),\nand it moves oldSnapshotControl->head_timestamp backwards by that\namount. Maybe that would require updating some other fields in\noldSnapshotControl too but it doesn't seem like we'd need to do a\nwhole lot.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 1 Apr 2020 16:24:54 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "Hi,\n\nOn 2020-04-01 15:11:52 -0500, Kevin Grittner wrote:\n> On Wed, Apr 1, 2020 at 2:43 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> > The thing that makes me really worried is that the contents of the time\n> > mapping seem very wrong. I've reproduced query results in a REPEATABLE\n> > READ transaction changing (pruned without triggering an error).\n>\n>\n> That is a very big problem. On the sort-of bright side (ironic in light of\n> the fact that I'm a big proponent of using serializable transactions), none\n> of the uses that I have personally seen of this feature use anything other\n> than the default READ COMMITTED isolation level. That might help explain\n> the lack of complaints for those using the feature. 
But yeah, I REALLY\n> want to see a solid fix for that!\n\nI don't think it's dependent on RR - it's just a bit easier to verify\nthat the query results are wrong that way.\n\n\n> > And I've\n> > reproduced rows not getting removed for much longer than they\n> > should, according to old_snapshot_threshold.\n> >\n> > I suspect one reason for users not noticing either is that\n> >\n> > a) it's plausible that users of the feature would mostly have\n> >   long-running queries/transactions querying immutable or insert only\n> >   data. Those would not notice that, on other tables, rows are getting\n> >   removed, where access would not trigger the required error.\n> >\n> > b) I observe long-ish phases where no cleanup is happening (due to\n> >   oldSnapshotControl->head_timestamp getting updated more often than\n> >   correct). But if old_snapshot_threshold is small enough in relation to\n> >   the time the generated bloat becomes problematic, there will still be\n> >   occasions to actually perform cleanup.\n> >\n>\n> Keep in mind that the real goal of this feature is not to eagerly _see_\n> \"snapshot too old\" errors, but to prevent accidental debilitating bloat due\n> to one misbehaving user connection.\n\nI don't think it's an \"intentional\" inaccuracy issue leading to\nthis. The map contents are just wrong; in particular the head_timestamp\nmost of the time is so new that\nTransactionIdLimitedForOldSnapshots() cannot make use of the mapping.\nWhen filling a new bucket,\nMaintainOldSnapshotThreshold() unconditionally updates\noldSnapshotControl->head_timestamp to be the current minute, which means\nit'll take old_snapshot_threshold minutes till\nTransactionIdLimitedForOldSnapshots() even looks at the mapping again.\n\nAs far as I can tell, with a large old_snapshot_threshold, it can take a\nvery long time to get to a head_timestamp that's old enough for\nTransactionIdLimitedForOldSnapshots() to do anything. 
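[Editorial aside, not part of the original message: the delay described above can be sketched with a deliberately simplified toy model. This is plain Python, not the snapmgr.c logic; the function name, the bucket model, and the "head chases now on every fill" behaviour are illustrative assumptions made to show why a check of the form `now - head >= threshold` stays false while the head timestamp keeps getting reset:]

```python
# Toy model (illustration only, NOT PostgreSQL source): if maintaining the
# per-minute map resets its head timestamp to the current minute, the
# limiting check "now - head >= threshold" cannot succeed until threshold
# minutes after the most recent reset.

THRESHOLD_MIN = 10  # stand-in for old_snapshot_threshold, in minutes

def simulate(minutes, reset_head_on_fill):
    """Return the minutes at which early pruning could have been enabled."""
    head = 0                      # minute the map is considered to start at
    limited_at = []
    for now in range(1, minutes + 1):
        if now - head >= THRESHOLD_MIN:
            limited_at.append(now)
        # One bucket is filled per minute under a steady workload.
        if reset_head_on_fill:
            head = now            # buggy: head chases "now" on every fill
        else:
            head = max(head, now - THRESHOLD_MIN)  # head only lags by the window
    return limited_at

# Under a steady per-minute workload the resetting variant never allows
# early pruning, while the lagging variant allows it from minute 10 onwards.
print(len(simulate(60, True)), len(simulate(60, False)))
```

In the real code the head is only reset when buckets are (re)filled, so in practice limiting is delayed, as the traces that follow show, rather than disabled outright.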
Look at this\ntrace of a pgbench run with old_snapshot_threshold enabled, showing some of\nthe debugging output added in the patch upthread.\n\nThis is with a threshold of 10min, in a freshly started database:\n> 2020-04-01 13:49:00.000 PDT [1268502][2/43571:2068881994] WARNING: head 0 min: filling 1 buckets starting at 0 for whenTaken 1 min, with xmin 2068881994\n> 2020-04-01 13:49:00.000 PDT [1268502][2/43571:2068881994] WARNING: old snapshot mapping at \"after update\" with head ts: 1, current entries: 2 max entries: 20, offset: 0\n> \t entry 0 (ring 0): min 1: xid 2068447214\n> \t entry 1 (ring 1): min 2: xid 2068881994\n>\n> 2020-04-01 13:50:00.000 PDT [1268505][5/122542:0] WARNING: old snapshot mapping at \"before update\" with head ts: 1, current entries: 2 max entries: 20, offset: 0\n> \t entry 0 (ring 0): min 1: xid 2068447214\n> \t entry 1 (ring 1): min 2: xid 2068881994\n>\n> 2020-04-01 13:50:00.000 PDT [1268505][5/122542:0] WARNING: head 1 min: updating existing bucket 1 for whenTaken 2 min, with xmin 2069199511\n> 2020-04-01 13:50:00.000 PDT [1268505][5/122542:0] WARNING: old snapshot mapping at \"after update\" with head ts: 1, current entries: 2 max entries: 20, offset: 0\n> \t entry 0 (ring 0): min 1: xid 2068447214\n> \t entry 1 (ring 1): min 2: xid 2069199511\n>\n> 2020-04-01 13:51:00.000 PDT [1268502][2/202674:2069516501] WARNING: old snapshot mapping at \"before update\" with head ts: 1, current entries: 2 max entries: 20, offset: 0\n> \t entry 0 (ring 0): min 1: xid 2068447214\n> \t entry 1 (ring 1): min 2: xid 2069199511\n>\n> 2020-04-01 13:51:00.000 PDT [1268502][2/202674:2069516501] WARNING: head 1 min: filling 2 buckets starting at 0 for whenTaken 3 min, with xmin 2069516499\n> 2020-04-01 13:51:00.000 PDT [1268502][2/202674:2069516501] WARNING: old snapshot mapping at \"after update\" with head ts: 3, current entries: 4 max entries: 20, offset: 0\n> \t entry 0 (ring 0): min 3: xid 2068447214\n> \t entry 1 (ring 1): min 4: xid 2069199511\n> 
\t entry 2 (ring 2): min 5: xid 2069516499\n> \t entry 3 (ring 3): min 6: xid 2069516499\n> ...\n> 2020-04-01 14:03:00.000 PDT [1268504][4/1158832:2075918094] WARNING: old snapshot mapping at \"before update\" with head ts: 7, current entries: 8 max entries: 20, offset: 0\n> \t entry 0 (ring 0): min 7: xid 2068447214\n> \t entry 1 (ring 1): min 8: xid 2071112480\n> \t entry 2 (ring 2): min 9: xid 2071434473\n> \t entry 3 (ring 3): min 10: xid 2071755177\n> \t entry 4 (ring 4): min 11: xid 2072075827\n> \t entry 5 (ring 5): min 12: xid 2072395700\n> \t entry 6 (ring 6): min 13: xid 2072715464\n> \t entry 7 (ring 7): min 14: xid 2073035816\n\nBefore the mapping change the database had been running for 15\nminutes. But the mapping starts only at 7 minutes from start. And then\nis updated to\n\n> 2020-04-01 14:03:00.000 PDT [1268504][4/1158832:2075918094] WARNING: head 7 min: filling 8 buckets starting at 0 for whenTaken 15 min, with xmin 2075918093\n> 2020-04-01 14:03:00.000 PDT [1268504][4/1158832:2075918094] WARNING: old snapshot mapping at \"after update\" with head ts: 15, current entries: 16 max entries: 20, offset: 0\n> \t entry 0 (ring 0): min 15: xid 2068447214\n> \t entry 1 (ring 1): min 16: xid 2071112480\n> \t entry 2 (ring 2): min 17: xid 2071434473\n> \t entry 3 (ring 3): min 18: xid 2071755177\n> \t entry 4 (ring 4): min 19: xid 2072075827\n> \t entry 5 (ring 5): min 20: xid 2072395700\n> \t entry 6 (ring 6): min 21: xid 2072715464\n> \t entry 7 (ring 7): min 22: xid 2073035816\n> \t entry 8 (ring 8): min 23: xid 2075918093\n> \t entry 9 (ring 9): min 24: xid 2075918093\n> \t entry 10 (ring 10): min 25: xid 2075918093\n> \t entry 11 (ring 11): min 26: xid 2075918093\n> \t entry 12 (ring 12): min 27: xid 2075918093\n> \t entry 13 (ring 13): min 28: xid 2075918093\n> \t entry 14 (ring 14): min 29: xid 2075918093\n> \t entry 15 (ring 15): min 30: xid 2075918093\n\nbe considered having started in that moment. 
And we expand the size of\nthe mapping by 8 at the same time, filling the new buckets with the same\nxid. Despite there being a continuous workload.\n\nAfter a few more minutes we get:\n> 2020-04-01 14:07:00.000 PDT [1268503][3/1473617:2077202085] WARNING: head 15 min: updating existing bucket 4 for whenTaken 19 min, with xmin 2077202085\n> 2020-04-01 14:07:00.000 PDT [1268503][3/1473617:2077202085] WARNING: old snapshot mapping at \"after update\" with head ts: 15, current entries: 16 max entries: 20, offset: 0\n> \t entry 0 (ring 0): min 15: xid 2068447214\n> \t entry 1 (ring 1): min 16: xid 2076238895\n> \t entry 2 (ring 2): min 17: xid 2076559154\n> \t entry 3 (ring 3): min 18: xid 2076880731\n> \t entry 4 (ring 4): min 19: xid 2077202085\n> \t entry 5 (ring 5): min 20: xid 2072395700\n> \t entry 6 (ring 6): min 21: xid 2072715464\n> \t entry 7 (ring 7): min 22: xid 2073035816\n> \t entry 8 (ring 8): min 23: xid 2075918093\n> \t entry 9 (ring 9): min 24: xid 2075918093\n> \t entry 10 (ring 10): min 25: xid 2075918093\n> \t entry 11 (ring 11): min 26: xid 2075918093\n> \t entry 12 (ring 12): min 27: xid 2075918093\n> \t entry 13 (ring 13): min 28: xid 2075918093\n> \t entry 14 (ring 14): min 29: xid 2075918093\n> \t entry 15 (ring 15): min 30: xid 2075918093\n>\n\nNote how the xids are not monotonically ordered. And how IsLimited still\nwon't be able to make use of the mapping, as the head timestamp is only\n4 minutes old (whenTaken == 19 min, head == 15min).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 1 Apr 2020 14:11:11 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "On Wed, Apr 1, 2020 at 1:25 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> Maybe that contrib module could even have some functions to simulate\n> aging without the passage of any real time. 
Like, say you have a\n> function or procedure old_snapshot_pretend_time_has_passed(integer),\n> and it moves oldSnapshotControl->head_timestamp backwards by that\n> amount. Maybe that would require updating some other fields in\n> oldSnapshotControl too but it doesn't seem like we'd need to do a\n> whole lot.\n\nI like that idea. I think that I've spotted what may be an independent\nbug, but I have to wait around for a minute or two to reproduce it\neach time. Makes it hard to get to a minimal test case.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 1 Apr 2020 15:00:16 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "On Wed, Apr 1, 2020 at 3:00 PM Peter Geoghegan <pg@bowt.ie> wrote:\r\n> I like that idea. I think that I've spotted what may be an independent\r\n> bug, but I have to wait around for a minute or two to reproduce it\r\n> each time. Makes it hard to get to a minimal test case.\r\n\r\nI now have simple steps to reproduce a bug when I start Postgres\r\nmaster with \"--old_snapshot_threshold=1\" (1 minute).\r\n\r\nThis example shows wrong answers to queries in session 2:\r\n\r\nSession 1:\r\n\r\npg@regression:5432 [1444078]=# create table snapshot_bug (col int4);\r\nCREATE TABLE\r\npg@regression:5432 [1444078]=# create index on snapshot_bug (col );\r\nCREATE INDEX\r\npg@regression:5432 [1444078]=# insert into snapshot_bug select i from\r\ngenerate_series(1, 500) i;\r\nINSERT 0 500\r\n\r\nSession 2 starts, and views the data in a serializable transaction:\r\n\r\npg@regression:5432 [1444124]=# begin isolation level serializable ;\r\nBEGIN\r\npg@regression:5432 [1444124]=*# select col from snapshot_bug where col\r\n>= 0 order by col limit 14;\r\n┌─────┐\r\n│ col │\r\n├─────┤\r\n│ 1 │\r\n│ 2 │\r\n│ 3 │\r\n│ 4 │\r\n│ 5 │\r\n│ 6 │\r\n│ 7 │\r\n│ 8 │\r\n│ 9 │\r\n│ 10 │\r\n│ 11 │\r\n│ 12 │\r\n│ 13 │\r\n│ 14 │\r\n└─────┘\r\n(14 
rows)\r\n\r\nSo far so good. Now session 2 continues:\r\n\r\npg@regression:5432 [1444078]=# delete from snapshot_bug where col < 15;\r\nDELETE 14\r\n\r\nSession 1:\r\n\r\n(repeats the same \"select col from snapshot_bug where col >= 0 order\r\nby col limit 14\" query every 100 ms using psql's \\watch 0.1)\r\n\r\nSession 2:\r\n\r\npg@regression:5432 [1444078]=# vacuum snapshot_bug ;\r\nVACUUM\r\n\r\nBefore too long, we see the following over in session 2 -- the answer\r\nthe query gives changes, even though this is a serializable\r\ntransaction:\r\n\r\nWed 01 Apr 2020 03:12:59 PM PDT (every 0.1s)\r\n\r\n┌─────┐\r\n│ col │\r\n├─────┤\r\n│ 1 │\r\n│ 2 │\r\n│ 3 │\r\n│ 4 │\r\n│ 5 │\r\n│ 6 │\r\n│ 7 │\r\n│ 8 │\r\n│ 9 │\r\n│ 10 │\r\n│ 11 │\r\n│ 12 │\r\n│ 13 │\r\n│ 14 │\r\n└─────┘\r\n(14 rows)\r\n\r\nWed 01 Apr 2020 03:13:00 PM PDT (every 0.1s)\r\n\r\n┌─────┐\r\n│ col │\r\n├─────┤\r\n│ 15 │\r\n│ 16 │\r\n│ 17 │\r\n│ 18 │\r\n│ 19 │\r\n│ 20 │\r\n│ 21 │\r\n│ 22 │\r\n│ 23 │\r\n│ 24 │\r\n│ 25 │\r\n│ 26 │\r\n│ 27 │\r\n│ 28 │\r\n└─────┘\r\n(14 rows)\r\n\r\nWed 01 Apr 2020 03:13:00 PM PDT (every 0.1s)\r\n\r\n┌─────┐\r\n│ col │\r\n├─────┤\r\n│ 15 │\r\n│ 16 │\r\n│ 17 │\r\n│ 18 │\r\n│ 19 │\r\n│ 20 │\r\n│ 21 │\r\n│ 22 │\r\n│ 23 │\r\n│ 24 │\r\n│ 25 │\r\n│ 26 │\r\n│ 27 │\r\n│ 28 │\r\n└─────┘\r\n(14 rows)\r\n\r\nWe continue to get this wrong answer for almost another minute (at\r\nleast on this occasion). Eventually we get \"snapshot too old\". Note\r\nthat the answer changes when we cross the \"minute threshold\"\r\n\r\nAndres didn't explain anything to me that contributed to finding the\r\nbug (though it could be a known bug, I don't think that it is). 
It\r\ntook me a surprisingly short amount of time to stumble upon this bug\r\n-- I didn't find it because I have good intuitions about how to break\r\nthe feature.\r\n\r\n-- \r\nPeter Geoghegan\r\n", "msg_date": "Wed, 1 Apr 2020 15:30:39 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "Hi,\n\nOn 2020-04-01 14:11:11 -0700, Andres Freund wrote:\n> As far as I can tell, with a large old_snapshot_threshold, it can take a\n> very long time to get to a head_timestamp that's old enough for\n> TransactionIdLimitedForOldSnapshots() to do anything. Look at this\n> trace of a pgbench run with old_snapshot_threshold enabled, showing some of\n> the debugging output added in the patch upthread.\n>\n> This is with a threshold of 10min, in a freshly started database:\n> [...]\n\nI took a lot longer till the queries started to be cancelled. The last\nmapping update before that was:\n\n> 2020-04-01 14:28:00.000 PDT [1268503][3/1894126:2078853871] WARNING: old snapshot mapping at \"before update\" with head ts: 31, current entries: 20 max entries: 20, offset: 12\n> \t entry 0 (ring 12): min 31: xid 2078468128\n> \t entry 1 (ring 13): min 32: xid 2078642746\n> \t entry 2 (ring 14): min 33: xid 2078672303\n> \t entry 3 (ring 15): min 34: xid 2078700941\n> \t entry 4 (ring 16): min 35: xid 2078728729\n> \t entry 5 (ring 17): min 36: xid 2078755425\n> \t entry 6 (ring 18): min 37: xid 2078781089\n> \t entry 7 (ring 19): min 38: xid 2078805567\n> \t entry 8 (ring 0): min 39: xid 2078830065\n> \t entry 9 (ring 1): min 40: xid 2078611853\n> \t entry 10 (ring 2): min 41: xid 2078611853\n> \t entry 11 (ring 3): min 42: xid 2078611853\n> \t entry 12 (ring 4): min 43: xid 2078611853\n> \t entry 13 (ring 5): min 44: xid 2078611853\n> \t entry 14 (ring 6): min 45: xid 2078611853\n> \t entry 15 (ring 7): min 46: xid 2078611853\n> \t entry 16 (ring 8): min 47: xid 
2078611853\n> \t entry 17 (ring 9): min 48: xid 2078611853\n> \t entry 18 (ring 10): min 49: xid 2078611853\n> \t entry 19 (ring 11): min 50: xid 2078611853\n>\n> 2020-04-01 14:28:00.000 PDT [1268503][3/1894126:2078853871] WARNING: head 31 min: updating existing bucket 1 for whenTaken 40 min, with xmin 2078853870\n> 2020-04-01 14:28:00.000 PDT [1268503][3/1894126:2078853871] WARNING: old snapshot mapping at \"after update\" with head ts: 31, current entries: 20 max entries: 20, offset: 12\n> \t entry 0 (ring 12): min 31: xid 2078468128\n> \t entry 1 (ring 13): min 32: xid 2078642746\n> \t entry 2 (ring 14): min 33: xid 2078672303\n> \t entry 3 (ring 15): min 34: xid 2078700941\n> \t entry 4 (ring 16): min 35: xid 2078728729\n> \t entry 5 (ring 17): min 36: xid 2078755425\n> \t entry 6 (ring 18): min 37: xid 2078781089\n> \t entry 7 (ring 19): min 38: xid 2078805567\n> \t entry 8 (ring 0 ): min 39: xid 2078830065\n> \t entry 9 (ring 1 ): min 40: xid 2078853870\n> \t entry 10 (ring 2 ): min 41: xid 2078611853\n> \t entry 11 (ring 3 ): min 42: xid 2078611853\n> \t entry 12 (ring 4 ): min 43: xid 2078611853\n> \t entry 13 (ring 5 ): min 44: xid 2078611853\n> \t entry 14 (ring 6 ): min 45: xid 2078611853\n> \t entry 15 (ring 7 ): min 46: xid 2078611853\n> \t entry 16 (ring 8 ): min 47: xid 2078611853\n> \t entry 17 (ring 9 ): min 48: xid 2078611853\n> \t entry 18 (ring 10): min 49: xid 2078611853\n> \t entry 19 (ring 11): min 50: xid 2078611853\n\n\nA query ran for fourty minutes during this, without getting cancelled.\n\n\n\nA good while later this happens:\n> 2020-04-01 15:30:00.000 PDT [1268503][3/2518699:2081262046] WARNING: old snapshot mapping at \"before update\" with head ts: 82, current entries: 20 max entries: 20, offset: 12\n> \t entry 0 (ring 12): min 82: xid 2080333207\n> \t entry 1 (ring 13): min 83: xid 2080527298\n> \t entry 2 (ring 14): min 84: xid 2080566990\n> \t entry 3 (ring 15): min 85: xid 2080605960\n> \t entry 4 (ring 16): min 86: xid 
2080644554\n> \t entry 5 (ring 17): min 87: xid 2080682957\n> \t entry 6 (ring 18): min 88: xid 2080721936\n> \t entry 7 (ring 19): min 89: xid 2080760947\n> \t entry 8 (ring 0): min 90: xid 2080799843\n> \t entry 9 (ring 1): min 91: xid 2080838696\n> \t entry 10 (ring 2): min 92: xid 2080877550\n> \t entry 11 (ring 3): min 93: xid 2080915870\n> \t entry 12 (ring 4): min 94: xid 2080954151\n> \t entry 13 (ring 5): min 95: xid 2080992556\n> \t entry 14 (ring 6): min 96: xid 2081030980\n> \t entry 15 (ring 7): min 97: xid 2081069403\n> \t entry 16 (ring 8): min 98: xid 2081107811\n> \t entry 17 (ring 9): min 99: xid 2081146322\n> \t entry 18 (ring 10): min 100: xid 2081185023\n> \t entry 19 (ring 11): min 101: xid 2081223632\n>\n> 2020-04-01 15:30:00.000 PDT [1268503][3/2518699:2081262046] WARNING: head 82 min: filling 20 buckets starting at 12 for whenTaken 102 min, with xmin 2081262046\n> 2020-04-01 15:30:00.000 PDT [1268503][3/2518699:2081262046] WARNING: old snapshot mapping at \"after update\" with head ts: 102, current entries: 1 max entries: 20, offset: 0\n> \t entry 0 (ring 0): min 102: xid 2081262046\n\nThe entire mapping reset, i.e. it'll take another fourty minutes for\ncancellations to happen.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 1 Apr 2020 16:09:19 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "Hi,\n\nOn 2020-04-01 15:30:39 -0700, Peter Geoghegan wrote:\n> On Wed, Apr 1, 2020 at 3:00 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > I like that idea. I think that I've spotted what may be an independent\n> > bug, but I have to wait around for a minute or two to reproduce it\n> > each time. 
Makes it hard to get to a minimal test case.\n>\n> I now have simple steps to reproduce a bug when I start Postgres\n> master with \"--old_snapshot_threshold=1\" (1 minute).\n\nThanks, that's super helpful.\n\n\n> This example shows wrong answers to queries in session 2:\n>\n> Session 1:\n>\n> pg@regression:5432 [1444078]=# create table snapshot_bug (col int4);\n> CREATE TABLE\n> pg@regression:5432 [1444078]=# create index on snapshot_bug (col );\n> CREATE INDEX\n> pg@regression:5432 [1444078]=# insert into snapshot_bug select i from\n> generate_series(1, 500) i;\n> INSERT 0 500\n>\n> Session 2 starts, and views the data in a serializable transaction:\n>\n> pg@regression:5432 [1444124]=# begin isolation level serializable ;\n> BEGIN\n> pg@regression:5432 [1444124]=*# select col from snapshot_bug where col\n> >= 0 order by col limit 14;\n> ┌─────┐\n> │ col │\n> ├─────┤\n> │ 1 │\n> │ 2 │\n> │ 3 │\n> │ 4 │\n> │ 5 │\n> │ 6 │\n> │ 7 │\n> │ 8 │\n> │ 9 │\n> │ 10 │\n> │ 11 │\n> │ 12 │\n> │ 13 │\n> │ 14 │\n> └─────┘\n> (14 rows)\n>\n> So far so good. Now session 2 continues:\n\n> pg@regression:5432 [1444078]=# delete from snapshot_bug where col < 15;\n> DELETE 14\n\nI got a bit confused here - you seemed to have switched session 1 and 2\naround? 
Doesn't seem to matter much though, I was able to reproduce this.\n\nThis indeed seems a separate bug.\n\nThe backtrace to the point where the xmin horizon is affected by\nTransactionIdLimitedForOldSnapshots() is:\n\n#0 TransactionIdLimitedForOldSnapshots (recentXmin=2082816071, relation=0x7f52ff3b56f8) at /home/andres/src/postgresql/src/backend/utils/time/snapmgr.c:1870\n#1 0x00005567f4cd1a55 in heap_page_prune_opt (relation=0x7f52ff3b56f8, buffer=175) at /home/andres/src/postgresql/src/backend/access/heap/pruneheap.c:106\n#2 0x00005567f4cc70e2 in heapam_index_fetch_tuple (scan=0x5567f6db3028, tid=0x5567f6db2e40, snapshot=0x5567f6d67d68, slot=0x5567f6db1b60,\n call_again=0x5567f6db2e46, all_dead=0x7ffce13d78de) at /home/andres/src/postgresql/src/backend/access/heap/heapam_handler.c:137\n#3 0x00005567f4cdf5e6 in table_index_fetch_tuple (scan=0x5567f6db3028, tid=0x5567f6db2e40, snapshot=0x5567f6d67d68, slot=0x5567f6db1b60,\n call_again=0x5567f6db2e46, all_dead=0x7ffce13d78de) at /home/andres/src/postgresql/src/include/access/tableam.h:1020\n#4 0x00005567f4ce0767 in index_fetch_heap (scan=0x5567f6db2de0, slot=0x5567f6db1b60) at /home/andres/src/postgresql/src/backend/access/index/indexam.c:577\n#5 0x00005567f4f19191 in IndexOnlyNext (node=0x5567f6db16a0) at /home/andres/src/postgresql/src/backend/executor/nodeIndexonlyscan.c:169\n#6 0x00005567f4ef4bc4 in ExecScanFetch (node=0x5567f6db16a0, accessMtd=0x5567f4f18f20 <IndexOnlyNext>, recheckMtd=0x5567f4f1951c <IndexOnlyRecheck>)\n at /home/andres/src/postgresql/src/backend/executor/execScan.c:133\n#7 0x00005567f4ef4c39 in ExecScan (node=0x5567f6db16a0, accessMtd=0x5567f4f18f20 <IndexOnlyNext>, recheckMtd=0x5567f4f1951c <IndexOnlyRecheck>)\n at /home/andres/src/postgresql/src/backend/executor/execScan.c:182\n#8 0x00005567f4f195d4 in ExecIndexOnlyScan (pstate=0x5567f6db16a0) at /home/andres/src/postgresql/src/backend/executor/nodeIndexonlyscan.c:317\n#9 0x00005567f4ef0f71 in ExecProcNodeFirst (node=0x5567f6db16a0) at 
/home/andres/src/postgresql/src/backend/executor/execProcnode.c:444\n#10 0x00005567f4f1d694 in ExecProcNode (node=0x5567f6db16a0) at /home/andres/src/postgresql/src/include/executor/executor.h:245\n#11 0x00005567f4f1d7d2 in ExecLimit (pstate=0x5567f6db14b8) at /home/andres/src/postgresql/src/backend/executor/nodeLimit.c:95\n#12 0x00005567f4ef0f71 in ExecProcNodeFirst (node=0x5567f6db14b8) at /home/andres/src/postgresql/src/backend/executor/execProcnode.c:444\n#13 0x00005567f4ee57c3 in ExecProcNode (node=0x5567f6db14b8) at /home/andres/src/postgresql/src/include/executor/executor.h:245\n#14 0x00005567f4ee83dd in ExecutePlan (estate=0x5567f6db1280, planstate=0x5567f6db14b8, use_parallel_mode=false, operation=CMD_SELECT, sendTuples=true,\n numberTuples=0, direction=ForwardScanDirection, dest=0x5567f6db3c78, execute_once=true)\n at /home/andres/src/postgresql/src/backend/executor/execMain.c:1646\n#15 0x00005567f4ee5e23 in standard_ExecutorRun (queryDesc=0x5567f6d0c490, direction=ForwardScanDirection, count=0, execute_once=true)\n at /home/andres/src/postgresql/src/backend/executor/execMain.c:364\n#16 0x00005567f4ee5c35 in ExecutorRun (queryDesc=0x5567f6d0c490, direction=ForwardScanDirection, count=0, execute_once=true)\n at /home/andres/src/postgresql/src/backend/executor/execMain.c:308\n#17 0x00005567f510c4de in PortalRunSelect (portal=0x5567f6d49260, forward=true, count=0, dest=0x5567f6db3c78)\n at /home/andres/src/postgresql/src/backend/tcop/pquery.c:912\n#18 0x00005567f510c191 in PortalRun (portal=0x5567f6d49260, count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x5567f6db3c78,\n altdest=0x5567f6db3c78, qc=0x7ffce13d7de0) at /home/andres/src/postgresql/src/backend/tcop/pquery.c:756\n#19 0x00005567f5106015 in exec_simple_query (query_string=0x5567f6cdd7a0 \"select col from snapshot_bug where col >= 0 order by col limit 14;\")\n at /home/andres/src/postgresql/src/backend/tcop/postgres.c:1239\n\nwhich in my tree is the elog() in the block below:\n\t\tif 
(!same_ts_as_threshold)\n\t\t{\n\t\t\tif (ts == update_ts)\n\t\t\t{\n\t\t\t\tPrintOldSnapshotMapping(\"non cached limit via update_ts\", false);\n\n\t\t\t\txlimit = latest_xmin;\n\t\t\t\tif (NormalTransactionIdFollows(xlimit, recentXmin))\n\t\t\t\t{\n\t\t\t\t\telog(LOG, \"increasing threshold from %u to %u (via update_ts)\",\n\t\t\t\t\t\t recentXmin, xlimit);\n\t\t\t\t\tSetOldSnapshotThresholdTimestamp(ts, xlimit);\n\t\t\t\t}\n\t\t\t}\n\nthe mapping at that point is:\n\n2020-04-01 16:14:00.025 PDT [1272381][4/2:0] WARNING: old snapshot mapping at \"non cached limit via update_ts\" with head ts: 1, current entries: 2 max entries: 11, offset: 0\n\t entry 0 (ring 0): min 1: xid 2082816067\n\t entry 1 (ring 1): min 2: xid 2082816071\n\nand the xmin changed is:\n2020-04-01 16:14:00.026 PDT [1272381][4/2:0] LOG: increasing threshold from 2082816071 to 2082816072 (via update_ts)\n\nin the frame of heap_prune_page_opt():\n(rr) p snapshot->whenTaken\n$5 = 639097973135655\n(rr) p snapshot->lsn\n$6 = 133951784192\n(rr) p MyPgXact->xmin\n$7 = 2082816071\n(rr) p BufferGetBlockNumber(buffer)\n$11 = 0\n\nin the frame for TransactionIdLimitedForOldSnapshots:\n(rr) p ts\n$8 = 639098040000000\n(rr) p latest_xmin\n$9 = 2082816072\n(rr) p update_ts\n$10 = 639098040000000\n\n\nThe primary issue here is that there is no TestForOldSnapshot() in\nheap_hot_search_buffer(). Therefore index fetches will never even try to\ndetect that tuples it needs actually have already been pruned away.\n\n\nThe wrong queries I saw took longer to reproduce, so I've not been able\nto debug the precise reasons.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 1 Apr 2020 16:59:51 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." 
}, { "msg_contents": "Hi,\n\nOn 2020-04-01 16:59:51 -0700, Andres Freund wrote:\n> The primary issue here is that there is no TestForOldSnapshot() in\n> heap_hot_search_buffer(). Therefore index fetches will never even try to\n> detect that tuples it needs actually have already been pruned away.\n\nFWIW, with autovacuum=off the query does not get killed until a manual\nvacuum, nor if fewer rows are deleted and the table has previously been\nvacuumed.\n\nThe vacuum in the second session isn't required. There just needs to be\nsomething consuming an xid, so that oldSnapshotControl->latest_xmin is\nincreased. A single SELECT txid_current(); or such in a separate session\nis sufficient.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 1 Apr 2020 17:17:21 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "On Wed, Apr 1, 2020 at 4:59 PM Andres Freund <andres@anarazel.de> wrote:\n> Thanks, that's super helpful.\n\nGlad I could help.\n\n> I got a bit confused here - you seemed to have switched session 1 and 2\n> around? Doesn't seem to matter much though, I was able to reproduce this.\n\nYeah, I switched the session numbers because I was in a hurry. Sorry about that.\n\nAs you have already worked out, one session does all the DDL and\ninitial loading of data, while the other session queries the data\nrepeatedly in a serializable (or RR) xact. The latter session exhibits\nthe bug.\n\n> This indeed seems a separate bug.\n\n> The primary issue here is that there is no TestForOldSnapshot() in\n> heap_hot_search_buffer(). 
Therefore index fetches will never even try to\n> detect that tuples it needs actually have already been pruned away.\n\nI suspected that heap_hot_search_buffer() was missing something.\n\n> The wrong queries I saw took longer to reproduce, so I've not been able\n> to debug the precise reasons.\n\nHow hard would it be to write a debug patch that reduces the quantum\nused in places like TransactionIdLimitedForOldSnapshots() to something\nmuch less than the current 1 minute quantum? That made reproducing the\nbug *very* tedious.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 1 Apr 2020 17:26:04 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "Hi,\n\nOn 2020-04-01 16:59:51 -0700, Andres Freund wrote:\n> The primary issue here is that there is no TestForOldSnapshot() in\n> heap_hot_search_buffer(). Therefore index fetches will never even try to\n> detect that tuples it needs actually have already been pruned away.\n\nbitmap heap scan doesn't have the necessary checks either. In the\nnon-lossy case it uses heap_hot_search_buffer, for the lossy case it has\nan open coded access without the check (that's bitgetpage() before v12,\nand heapam_scan_bitmap_next_block() after that).\n\nNor do sample scans, but that was \"at least\" introduced later.\n\nAs far as I can tell there's not sufficient in-tree explanation of when\ncode needs to test for an old snapshot. There's just the following\ncomment above TestForOldSnapshot():\n * Check whether the given snapshot is too old to have safely read the given\n * page from the given table. If so, throw a \"snapshot too old\" error.\n *\n * This test generally needs to be performed after every BufferGetPage() call\n * that is executed as part of a scan. 
It is not needed for calls made for\n * modifying the page (for example, to position to the right place to insert a\n * new index tuple or for vacuuming). It may also be omitted where calls to\n * lower-level functions will have already performed the test.\n\nBut I don't find \"as part of a scan\" very informative. I mean, it\nwas explicitly not called from with the executor back then (for a while\nthe check was embedded in BufferGetPage()):\n\nstatic void\nbitgetpage(HeapScanDesc scan, TBMIterateResult *tbmres)\n...\n\t\tPage\t\tdp = BufferGetPage(buffer, NULL, NULL, BGP_NO_SNAPSHOT_TEST);\n\n\nI am more than a bit dumbfounded here.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 1 Apr 2020 17:54:06 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "On Wed, Apr 1, 2020 at 5:54 PM Andres Freund <andres@anarazel.de> wrote:\n> As far as I can tell there's not sufficient in-tree explanation of when\n> code needs to test for an old snapshot. There's just the following\n> comment above TestForOldSnapshot():\n> * Check whether the given snapshot is too old to have safely read the given\n> * page from the given table. If so, throw a \"snapshot too old\" error.\n> *\n> * This test generally needs to be performed after every BufferGetPage() call\n> * that is executed as part of a scan. It is not needed for calls made for\n> * modifying the page (for example, to position to the right place to insert a\n> * new index tuple or for vacuuming). It may also be omitted where calls to\n> * lower-level functions will have already performed the test.\n>\n> But I don't find \"as part of a scan\" very informative.\n\nI also find it strange that _bt_search() calls TestForOldSnapshot() on\nevery level on the tree (actually, it calls _bt_moveright() which\ncalls it on every level of the tree). 
At least with reads (see the\ncomments at the top of _bt_moveright()).\n\nWhy do we need to do the test on internal pages? We only ever call\nPredicateLockPage() on a leaf nbtree page. Why the inconsistency\nbetween the two similar-seeming cases?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 1 Apr 2020 18:04:15 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "Hi,\n\nOn 2020-04-01 17:54:06 -0700, Andres Freund wrote:\n> * Check whether the given snapshot is too old to have safely read the given\n> * page from the given table. If so, throw a \"snapshot too old\" error.\n> *\n> * This test generally needs to be performed after every BufferGetPage() call\n> * that is executed as part of a scan. It is not needed for calls made for\n> * modifying the page (for example, to position to the right place to insert a\n> * new index tuple or for vacuuming). It may also be omitted where calls to\n> * lower-level functions will have already performed the test.\n\nTo me this sounds like we'd not need to check for an old snapshot in\nheap_delete/update/lock_tuple. And they were explictly not testing for\nold snapshots. But I don't understand why that'd be correct?\n\nIn a lot of UPDATE/DELETE queries there's no danger that the target\ntuple will be pruned away, because the underlying scan node will hold a\npin. But I don't think that's guaranteed. E.g. if a tidscan is below the\nModifyTable node, it will not hold a pin by the time we heap_update,\nbecause there's no scan holding a pin, and the slot will have been\nmaterialized before updating.\n\nThere are number of other ways, I think.\n\n\nSo it's possible to get to heap_update/delete (and probably lock_tuple\nas well) with a tid that's already been pruned away. Neither contains a\nnon-assert check ensuring the tid still is normal.\n\nWith assertions we'd fail with an assertion in PageGetItem(). 
But\nwithout it looks like we'll interpret the page header as a tuple. Which\ncan't be good.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 1 Apr 2020 18:46:46 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "On Wed, Apr 1, 2020 at 6:59 PM Andres Freund <andres@anarazel.de> wrote:\n\n> index fetches will never even try to\n> detect that tuples it needs actually have already been pruned away.\n>\n\nI looked at this flavor of problem today and from what I saw:\n\n(1) This has been a problem all the way back to 9.6.0.\n(2) The behavior is correct if the index creation is skipped or if\nenable_indexscan is turned off in the transaction, confirming Andres'\nanalysis.\n(3) Pruning seems to happen as intended; the bug found by Peter seems to be\nentirely about failing to TestForOldSnapshot() where needed.\n\n-- \nKevin Grittner\nVMware vCenter Server\nhttps://www.vmware.com/", "msg_date": "Thu, 2 Apr 2020 11:05:14 -0500", "msg_from": "Kevin Grittner <kgrittn@gmail.com>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more."
}, { "msg_contents": "On Wed, Apr 1, 2020 at 7:17 PM Andres Freund <andres@anarazel.de> wrote:\n\n> FWIW, with autovacuum=off the query does not get killed until a manual\n> vacuum, nor if fewer rows are deleted and the table has previously been\n> vacuumed.\n>\n> The vacuum in the second session isn't required. There just needs to be\n> something consuming an xid, so that oldSnapshotControl->latest_xmin is\n> increased. A single SELECT txid_current(); or such in a separate session\n> is sufficient.\n>\n\nAgreed. I don't see that part as a problem; if no xids are being consumed,\nit's hard to see how we could be heading into debilitating levels of bloat,\nso there is no need to perform the early pruning. It would not be worth\nconsuming any cycles to ensure that pruning happens sooner than it does in\nthis case. It's OK for it to happen any time past the moment that the\nsnapshot hits the threshold, but it's also OK for it to wait until a vacuum\nof the table or until some activity consumes an xid.\n\n-- \nKevin Grittner\nVMware vCenter Server\nhttps://www.vmware.com/", "msg_date": "Thu, 2 Apr 2020 11:36:32 -0500", "msg_from": "Kevin Grittner <kgrittn@gmail.com>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more."
}, { "msg_contents": "Hi, \n\nOn April 2, 2020 9:36:32 AM PDT, Kevin Grittner <kgrittn@gmail.com> wrote:\n>On Wed, Apr 1, 2020 at 7:17 PM Andres Freund <andres@anarazel.de>\n>wrote:\n>\n>> FWIW, with autovacuum=off the query does not get killed until a\n>manual\n>> vacuum, nor if fewer rows are deleted and the table has previously\n>been\n>> vacuumed.\n>>\n>> The vacuum in the second session isn't required. There just needs to\n>be\n>> something consuming an xid, so that oldSnapshotControl->latest_xmin\n>is\n>> increased. A single SELECT txid_current(); or such in a separate\n>session\n>> is sufficient.\n>>\n>\n>Agreed. I don't see that part as a problem; if no xids are being\n>consumed,\n>it's hard to see how we could be heading into debilitating levels of\n>bloat,\n>so there is no need to perform the early pruning. It would not be\n>worth\n>consuming any cycles to ensure that pruning happens sooner than it does\n>in\n>this case. It's OK for it to happen any time past the moment that the\n>snapshot hits the threshold, but it's also OK for it to wait until a\n>vacuum\n>of the table or until some activity consumes an xid.\n\nThe point about txid being sufficient was just about simplifying the reproducer for wrong query results.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n", "msg_date": "Thu, 02 Apr 2020 09:40:15 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more."
}, { "msg_contents": "On Tue, Mar 31, 2020 at 11:40 PM Andres Freund <andres@anarazel.de> wrote:\n> The problem, as far as I can tell, is that\n> oldSnapshotControl->head_timestamp appears to be intended to be the\n> oldest value in the ring. But we update it unconditionally in the \"need\n> a new bucket, but it might not be the very next one\" branch of\n> MaintainOldSnapshotTimeMapping().\n>\n> While there's not really a clear-cut comment explaining whether\n> head_timestamp() is intended to be the oldest or the newest timestamp,\n> it seems to me that the rest of the code treats it as the \"oldest\"\n> timestamp.\n\nAt first, I was almost certain that it's supposed to be the oldest\nbased only on the OldSnapshotControlData struct fields themselves. It\nseemed pretty unambiguous:\n\n int head_offset; /* subscript of oldest tracked time */\n TimestampTz head_timestamp; /* time corresponding to head xid */\n\n(Another thing that supports this interpretation is the fact that\nthere is a separate current_timestamp latest timestamp field in\nOldSnapshotControlData.)\n\nBut then I took another look at the \"We need a new bucket, but it\nmight not be the very next one\" branch. It does indeed seem to\ndirectly contradict the OldSnapshotControlData comments/documentation.\nNot just the code itself, either. Even comments from this \"new\nbucket\" branch disagree with the OldSnapshotControlData comments:\n\n if (oldSnapshotControl->count_used ==\nOLD_SNAPSHOT_TIME_MAP_ENTRIES)\n {\n /* Map full and new value replaces old head. */\n int old_head = oldSnapshotControl->head_offset;\n\n if (old_head == (OLD_SNAPSHOT_TIME_MAP_ENTRIES - 1))\n oldSnapshotControl->head_offset = 0;\n else\n oldSnapshotControl->head_offset = old_head + 1;\n oldSnapshotControl->xid_by_minute[old_head] = xmin;\n }\n\nHere, the comment says the map (circular buffer) is full, and that we\nmust replace the current head with a *new* value/timestamp (the one we\njust got in GetSnapshotData()). 
It looks as if the design of the data\nstructure changed during the development of the original patch, but\nthis entire branch was totally overlooked.\n\nIn conclusion, I share Andres' concerns here. There are glaring\nproblems with how we manipulate the data structure that controls the\neffective horizon for pruning. Maybe they can be fixed while leaving\nthe code that manages the OldSnapshotControl circular buffer in\nsomething resembling its current form, but I doubt it. In my opinion,\nthere is no approach to fixing \"snapshot too old\" that won't have some\nserious downside.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 2 Apr 2020 11:28:06 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "On Thu, Apr 2, 2020 at 11:28 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> In conclusion, I share Andres' concerns here. There are glaring\n> problems with how we manipulate the data structure that controls the\n> effective horizon for pruning. Maybe they can be fixed while leaving\n> the code that manages the OldSnapshotControl circular buffer in\n> something resembling its current form, but I doubt it. In my opinion,\n> there is no approach to fixing \"snapshot too old\" that won't have some\n> serious downside.\n\nI'll add something that might be constructive: It would probably be a\ngood idea to introduce a function like syncrep.c's\nSyncRepQueueIsOrderedByLSN() function, which is designed to be called\nby assertions only. That would both clearly document and actually\nverify the circular buffer/OldSnapshotControl data structure's\ninvariants.\n\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Thu, 2 Apr 2020 13:04:12 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." 
}, { "msg_contents": "Hi,\n\nI just spent a good bit more time improving my snapshot patch, so it\ncould work well with a fixed version of the old_snapshot_threshold\nfeature. Mostly so there's no unnecessary dependency on the resolution\nof the issues in that patch.\n\n\nWhen testing my changes, for quite a while, I could not get\nsrc/test/modules/snapshot_too_old/ to trigger a single too-old error.\n\nIt turns out, that's because there's not a single tuple removed due to\nold_snapshot_threshold in src/test/modules/snapshot_too_old/. The only\nreason the current code triggers any such errors is that\n\na) TransactionIdLimitedForOldSnapshots() is always called in\n heap_page_prune_opt(), even if the \"not limited\" horizon\n (i.e. RecentGlobalDataXmin) is more than old enough to allow for\n pruning. That includes pages without a pd_prune_xid.\n\nb) TransactionIdLimitedForOldSnapshots(), in the old_snapshot_threshold\n == 0 branch, always calls\n SetOldSnapshotThresholdTimestamp(ts, xlimit)\n even if the horizon has not changed due to snapshot_too_old (xlimit\n is initially set to the \"input\" horizon, and only increased if\n between (recentXmin, MyProc->xmin)).\n\n\nTo benefit from the snapshot scalability improvements in my patchset, it\nis important to avoid unnecessarily determining the \"accurate\" xmin\nhorizon, if it's clear from the \"lower boundary\" horizon that pruning\ncan happen. Therefore I changed heap_page_prune_opt() and\nheap_page_prune() to only limit when we couldn't prune.\n\nIn the course of that I separated getting the horizon from\nTransactionIdLimitedForOldSnapshots() and triggering errors when an\nalready removed tuple would be needed via\nTransactionIdLimitedForOldSnapshots().\n\nBecause there are no occasions to actually remove tuples in the entire\ntest, there now were no TransactionIdLimitedForOldSnapshots() calls. And\nthus no errors. 
My code turns out to actually work.\n\n\nThus, if I change the code in master from:\n\t\tTransactionId xlimit = recentXmin;\n...\n\t\tif (old_snapshot_threshold == 0)\n\t\t{\n\t\t\tif (TransactionIdPrecedes(latest_xmin, MyPgXact->xmin)\n\t\t\t\t&& TransactionIdFollows(latest_xmin, xlimit))\n\t\t\t\txlimit = latest_xmin;\n\n\t\t\tts -= 5 * USECS_PER_SEC;\n\t\t\tSetOldSnapshotThresholdTimestamp(ts, xlimit);\n\n\t\t\treturn xlimit;\n\t\t}\n\nto\n...\n\t\tif (old_snapshot_threshold == 0)\n\t\t{\n\t\t\tif (TransactionIdPrecedes(latest_xmin, MyPgXact->xmin)\n\t\t\t\t&& TransactionIdFollows(latest_xmin, xlimit))\n\t\t\t{\n\t\t\t\txlimit = latest_xmin;\n\t\t\t\tSetOldSnapshotThresholdTimestamp(ts, xlimit);\n\t\t\t}\n\n\t\t\tts -= 5 * USECS_PER_SEC;\n\n\t\t\treturn xlimit;\n\t\t}\n\nthere's not a single error raised in the existing tests. Not a *single*\ntuple removal is caused by old_snapshot_threshold. We just test the\norder of SetOldSnapshotThresholdTimestamp() calls. We have code in the\nbackend to support testing old_snapshot_threshold, but we don't test\nanything meaningful in the feature. We basically test an oddly behaving\nversion of \"transaction_timeout = 5s\". I can't emphasize enough\nhow baffling I find this.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 2 Apr 2020 17:12:35 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more."
}, { "msg_contents": "Hi,\n\nOn 2020-04-01 12:02:18 -0400, Robert Haas wrote:\n> I have no objection to the idea that *if* the feature is hopelessly\n> broken, it should be removed.\n\nI don't think we have a real choice here at this point, at least for the\nback branches.\n\nJust about nothing around old_snapshot_threshold works correctly:\n\n* There are basically no tests (see [1] I just sent, and also\n old_snapshot_threshold bypassing a lot of the relevant code).\n\n* We don't detect errors after hot pruning (to allow that is a major\n point of the feature) when access is via any sort of index\n scans. Wrong query results.\n\n* The time->xid mapping is entirely broken. We don't prevent bloat\n for many multiples of old_snapshot_threshold (if above 1min).\n\n It's possible, but harder, to have this cause wrong query results.\n\n* In read-mostly workloads it can trigger errors in sessions that are\n much younger than old_snapshot_threshold, if the transactionid is not\n advancing.\n\n I've not tried to reproduce, but I suspect this can also cause wrong\n query results. Because a later snapshot can have the same xmin as\n older transactions, it sure looks like we can end up with data from an\n older xmin getting removed, but the newer snapshot's whenTaken will\n prevent TestForOldSnapshot_impl() from raising an error.\n\n* I am fairly sure that it can cause crashes (or even data corruption),\n because it assumes that DML never needs to check for old snapshots\n (with no meaningful justification). Leading heap_update/delete to\n assume the page header is a tuple.\n\n* There's obviously also the wraparound issue that made me start this\n thread initially.\n\nSince this is a feature that can result in wrong query results (and\nquite possibly crashes / data corruption), I don't think we can just\nleave this unfixed. 
But given the amount of code / infrastructure\nchanges required to get this into a working feature, I don't see how we\ncan unleash those changes onto the stable branches.\n\nThere's quite a few issues in here that require not just local bugfixes,\nbut some design changes too. And it's pretty clear that the feature\ndidn't go through enough review before getting committed. I see quite\nsome merit in removing the code in master, and having a potential\nreimplementation go through a normal feature integration process.\n\n\nI don't really know what to do here. Causing problems by neutering a\nfeature in the back branch *sucks*. While not quite as bad, removing a\nfeature without a replacement in a major release is pretty harsh\ntoo. But I don't really see any other realistic path forward.\n\n\nFWIW, I've now worked around the interdependency of s_t_o my snapshot\nscalability patch (only took like 10 days). I have manually confirmed it\nworks with 0/1 minute thresholds. I can make the tests pass unmodified\nif I just add SetOldSnapshotThresholdTimestamp() calls when not\nnecessary (which obviously makes no sense). Lead to some decent\nimprovements around pruning that are independent of s_t_o (with more\npossibilities \"opened\"). But I still think we need to do something\nhere.\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/20200403001235.e6jfdll3gh2ygbuc%40alap3.anarazel.de\n\n\n", "msg_date": "Thu, 2 Apr 2020 17:17:13 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "On Thu, Apr 2, 2020 at 5:17 PM Andres Freund <andres@anarazel.de> wrote:\n> Since this is a feature that can result in wrong query results (and\n> quite possibly crashes / data corruption), I don't think we can just\n> leave this unfixed. 
But given the amount of code / infrastructure\n> changes required to get this into a working feature, I don't see how we\n> can unleash those changes onto the stable branches.\n\nI don't think that the feature can be allowed to remain in anything\nlike its current form. The current design is fundamentally unsound.\n\n> I don't really know what to do here. Causing problems by neutering a\n> feature in the back branch *sucks*. While not quite as bad, removing a\n> feature without a replacement in a major release is pretty harsh\n> too. But I don't really see any other realistic path forward.\n\nI have an idea that might allow us to insulate some users from the\nproblem caused by a full revert (or disabling the feature) in the\nbackbranches. I wouldn't usually make such a radical suggestion, but\nthe current situation is exceptional. Anything that avoids serious\npain for users deserves to be considered.\n\nKevin said this about the feature very recently:\n\n\"\"\"\nKeep in mind that the real goal of this feature is not to eagerly\n_see_ \"snapshot too old\" errors, but to prevent accidental\ndebilitating bloat due to one misbehaving user connection. This is\nparticularly easy to see (and therefore unnervingly common) for those\nusing ODBC, which in my experience tends to correspond to the largest\ncompanies which are using PostgreSQL. In some cases, the snapshot\nwhich is preventing removal of the rows will never be used again;\nremoval of the rows will not actually affect the result of any query,\nbut only the size and performance of the database. This is a \"soft\nlimit\" -- kinda like max_wal_size. Where there was a trade-off\nbetween accuracy of the limit and performance, the less accurate way\nwas intentionally chosen. I apologize for not making that more clear\nin comments.\n\"\"\"\n\nODBC uses cursors in rather strange ways, often to implement a kind of\nODBC-level cache. 
See the description of \"Use Declare/Fetch\" from\nhttps://odbc.postgresql.org/docs/config.html to get some idea of what\nthis can look like.\n\nI think that it's worth considering whether or not there are a\nsignificant number of \"snapshot too old\" users that rarely or never\nrely on old snapshots used by new queries. Kevin said that this\nhappens \"in some cases\", but how many cases? Might it be that many\n\"snapshot too old\" users could get by with a version of the feature\nthat makes the most conservative possible assumptions, totally giving\nup on the idea of differentiating which blocks are truly safe to\naccess with an \"old\" snapshot? (In other words, one that assumes that\nthey're *all* unsafe for an \"old\" snapshot.)\n\nI'm thinking of a version of \"snapshot too old\" that amounts to a\nstatement timeout that gets applied for xmin horizon type purposes in\nthe conventional way, while only showing an error to the client if and\nwhen they access literally any buffer (though not when the relation is\na system catalog). Is it possible that something along those lines is\nappreciably better than nothing to users? If it is, and if we can find\na way to manage the transition, then maybe we could tolerate\nsupporting this greatly simplified implementation of \"snapshot too\nold\".\n\nI feel slightly silly for even suggesting this. I have to ask. Maybe\nnobody noticed a problem with the feature before now (at least in\npart) because they didn't truly care about old snapshots anyway. They\njust wanted to avoid a significant impact from buggy code that leaks\ncursors and things like that. Or, they were happy as long as they\ncould still access ODBC's \"100 rows in a cache\" through the cursor.\nThe docs say that a old_snapshot_threshold setting in the hours is\nabout the lowest reasonable setting for production use, which seems\nrather high to me. 
It almost seems as if the feature specifically\ntargets misbehaving applications already.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 2 Apr 2020 18:21:31 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "On Fri, Apr 3, 2020 at 6:52 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Thu, Apr 2, 2020 at 5:17 PM Andres Freund <andres@anarazel.de> wrote:\n> > Since this is a feature that can result in wrong query results (and\n> > quite possibly crashes / data corruption), I don't think we can just\n> > leave this unfixed. But given the amount of code / infrastructure\n> > changes required to get this into a working feature, I don't see how we\n> > can unleash those changes onto the stable branches.\n>\n\nAs per my initial understanding, the changes required are (a) There\nseem to be multiple places where TestForOldSnapshot is missing, (b)\nTestForOldSnapshot itself needs to be reviewed carefully to see if it\nhas problems, (c) Some of the members of OldSnapshotControlData like\nhead_timestamp and xid_by_minute are not maintained accurately, (d)\nhandling of wraparound for xids in the in-memory data-structure for\nthis feature is required, (e) test infrastructure is not good enough\nto catch bugs or improve this feature.\n\nNow, this sounds like quite a bit of work but OTOH, most of the\ncritical changes required will be in only a few functions like\nTransactionIdLimitedForOldSnapshots(),\nMaintainOldSnapshotTimeMapping(), TestForOldSnapshot(). I don't deny\nthe possibility that we might need much more work or we need to come\nup with quite a different design to address all these problems but\nunless Kevin or someone else comes up with a solution to\naddress all of these problems, we can't be sure of that.\n\n> I don't think that the feature can be allowed to remain in anything\n> like its current form. 
The current design is fundamentally unsound.\n>\n\nAgreed, but OTOH, not giving time to Kevin or others who might be\ninterested to support this work is also not fair. I think once\nsomebody comes up with patches for problems we can decide whether this\nfeature can be salvaged in back-branches or we need to disable it in a\nhard-way. Now, if Kevin himself is not interested in fixing or nobody\nshows up to help, then surely we can take the decision sooner but\ngiving time for a couple of weeks (or even till we are near for PG13\nrelease) in this case doesn't seem like a bad idea.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 3 Apr 2020 14:32:09 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "Hi,\n\nOn 2020-04-03 14:32:09 +0530, Amit Kapila wrote:\n> On Fri, Apr 3, 2020 at 6:52 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> >\n> > On Thu, Apr 2, 2020 at 5:17 PM Andres Freund <andres@anarazel.de> wrote:\n> > > Since this is a feature that can result in wrong query results (and\n> > > quite possibly crashes / data corruption), I don't think we can just\n> > > leave this unfixed. 
But given the amount of code / infrastructure\n> > > changes required to get this into a working feature, I don't see how we\n> > > can unleash those changes onto the stable branches.\n> >\n>\n> As per my initial understanding, the changes required are (a) There\n> seem to be multiple places where TestForOldSnapshot is missing, (b)\n> TestForOldSnapshot itself need to be reviewed carefully to see if it\n> has problems, (c) Some of the members of OldSnapshotControlData like\n> head_timestamp and xid_by_minute are not maintained accurately, (d)\n> handling of wraparound for xids in the in-memory data-structure for\n> this feature is required, (e) test infrastructure is not good enough\n> to catch bugs or improve this feature.\n\nAnd a bunch more correctness issues. But basically, yes.\n\nWhen you say \"(c) Some of the members of OldSnapshotControlData like\nhead_timestamp and xid_by_minute are not maintained accurately)\" - note\nthat that's the core state for the whole feature.\n\nWith regards to test: \"not good enough\" is somewhat of an\nunderstatement. Not a *single* tuple is removed in the tests due to\nold_snapshot_threshold - and removing tuples is the entire point.\n\n\n> Now, this sounds like a quite of work but OTOH, if we see most of the\n> critical changes required will be in only a few functions like\n> TransactionIdLimitedForOldSnapshots(),\n> MaintainOldSnapshotTimeMapping(), TestForOldSnapshot().\n\nI don't think that's really the case. Every place reading a buffer needs\nto be inspected, and new calls added. They aren't free, and I'm not sure\nall of them have the relevant snapshot available. 
To fix the issue of\nspurious errors, we'd likely need changes outside of those, and it'd\nquite possibly have performance / bloat implications.\n\n\n> I don't deny the possibility that we might need much more work or we\n> need to come up with quite a different design to address all these\n> problems but unless Kevin or someone else doesn't come up with a\n> solution to address all of these problems, we can't be sure of that.\n>\n> > I don't think that the feature can be allowed to remain in anything\n> > like its current form. The current design is fundamentally unsound.\n> >\n>\n> Agreed, but OTOH, not giving time to Kevin or others who might be\n> interested to support this work is also not fair. I think once\n> somebody comes up with patches for problems we can decide whether this\n> feature can be salvaged in back-branches or we need to disable it in a\n> hard-way. Now, if Kevin himself is not interested in fixing or nobody\n> shows up to help, then surely we can take the decision sooner but\n> giving time for a couple of weeks (or even till we are near for PG13\n> release) in this case doesn't seem like a bad idea.\n\nIt'd certainly be great if somebody came up with fixes, yes. Even if we\nhad to disable it in the back branches, that'd allow us to keep the\nfeature around, at least.\n\nThe likelihood of regressions even when the feature is not on does not\nseem that low. But you're right, we'll be able to better judge it with\nfixes to look at.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 3 Apr 2020 12:03:09 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "On Sat, Apr 4, 2020 at 12:33 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2020-04-03 14:32:09 +0530, Amit Kapila wrote:\n> >\n> > Agreed, but OTOH, not giving time to Kevin or others who might be\n> > interested to support this work is also not fair. 
I think once\n> > somebody comes up with patches for problems we can decide whether this\n> > feature can be salvaged in back-branches or we need to disable it in a\n> > hard-way. Now, if Kevin himself is not interested in fixing or nobody\n> > shows up to help, then surely we can take the decision sooner but\n> > giving time for a couple of weeks (or even till we are near for PG13\n> > release) in this case doesn't seem like a bad idea.\n>\n> It'd certainly be great if somebody came up with fixes, yes. Even if we\n> had to disable it in the back branches, that'd allow us to keep the\n> feature around, at least.\n>\n> The likelihood of regressions even when the feature is not on does not\n> seem that low.\n>\n\nYeah, that is the key point. IIRC, when this feature got added Kevin\nand others spent a lot of effort to ensure exactly that.\n\n> But you're right, we'll be able to better judge it with\n> fixes to look at.\n>\n\nI am hoping Kevin will take the lead and then others can also help.\nKevin, please do let us know if you are *not* planning to work on the\nissues raised in this thread so that we can think of an alternative.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 4 Apr 2020 14:27:56 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "On Fri, Apr 3, 2020 at 2:22 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I think that it's worth considering whether or not there are a\n> significant number of \"snapshot too old\" users that rarely or never\n> rely on old snapshots used by new queries. Kevin said that this\n> happens \"in some cases\", but how many cases? 
Might it be that many\n> \"snapshot too old\" users could get by with a version of the feature\n> that makes the most conservative possible assumptions, totally giving\n> up on the idea of differentiating which blocks are truly safe to\n> access with an \"old\" snapshot? (In other words, one that assumes that\n> they're *all* unsafe for an \"old\" snapshot.)\n>\n> I'm thinking of a version of \"snapshot too old\" that amounts to a\n> statement timeout that gets applied for xmin horizon type purposes in\n> the conventional way, while only showing an error to the client if and\n> when they access literally any buffer (though not when the relation is\n> a system catalog). Is it possible that something along those lines is\n> appreciably better than nothing to users? If it is, and if we can find\n> a way to manage the transition, then maybe we could tolerate\n> supporting this greatly simplified implementation of \"snapshot too\n> old\".\n\nHi Peter,\n\nInteresting idea. I'm keen to try prototyping it to see how well it\nworks out in practice. Let me know soon if you already have designs\non that and I'll get out of your way, otherwise I'll give it a try and\nshare what I come up with.\n\n\n", "msg_date": "Mon, 13 Apr 2020 14:58:34 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "Hi,\n\nOn 2020-04-13 14:58:34 +1200, Thomas Munro wrote:\n> On Fri, Apr 3, 2020 at 2:22 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > I think that it's worth considering whether or not there are a\n> > significant number of \"snapshot too old\" users that rarely or never\n> > rely on old snapshots used by new queries. Kevin said that this\n> > happens \"in some cases\", but how many cases? 
Might it be that many\n> > \"snapshot too old\" users could get by with a version of the feature\n> > that makes the most conservative possible assumptions, totally giving\n> > up on the idea of differentiating which blocks are truly safe to\n> > access with an \"old\" snapshot? (In other words, one that assumes that\n> > they're *all* unsafe for an \"old\" snapshot.)\n> >\n> > I'm thinking of a version of \"snapshot too old\" that amounts to a\n> > statement timeout that gets applied for xmin horizon type purposes in\n> > the conventional way, while only showing an error to the client if and\n> > when they access literally any buffer (though not when the relation is\n> > a system catalog). Is it possible that something along those lines is\n> > appreciably better than nothing to users? If it is, and if we can find\n> > a way to manage the transition, then maybe we could tolerate\n> > supporting this greatly simplified implementation of \"snapshot too\n> > old\".\n> \n> Hi Peter,\n> \n> Interesting idea. I'm keen to try prototyping it to see how well it\n> works out it practice. Let me know soon if you already have designs\n> on that and I'll get out of your way, otherwise I'll give it a try and\n> share what I come up with.\n\nFWIW, I think the part that is currently harder to fix is the time->xmin\nmapping and some related pieces. Second comes the test\ninfrastructure. Compared to those, adding additional checks for old\nsnapshots wouldn't be too hard - although I'd argue that the approach of\nsprinkling these tests everywhere isn't that scalable...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 12 Apr 2020 22:14:11 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." 
}, { "msg_contents": "On Mon, Apr 13, 2020 at 2:58 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Fri, Apr 3, 2020 at 2:22 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > I think that it's worth considering whether or not there are a\n> > significant number of \"snapshot too old\" users that rarely or never\n> > rely on old snapshots used by new queries. Kevin said that this\n> > happens \"in some cases\", but how many cases? Might it be that many\n> > \"snapshot too old\" users could get by with a version of the feature\n> > that makes the most conservative possible assumptions, totally giving\n> > up on the idea of differentiating which blocks are truly safe to\n> > access with an \"old\" snapshot? (In other words, one that assumes that\n> > they're *all* unsafe for an \"old\" snapshot.)\n> >\n> > I'm thinking of a version of \"snapshot too old\" that amounts to a\n> > statement timeout that gets applied for xmin horizon type purposes in\n> > the conventional way, while only showing an error to the client if and\n> > when they access literally any buffer (though not when the relation is\n> > a system catalog). Is it possible that something along those lines is\n> > appreciably better than nothing to users? If it is, and if we can find\n> > a way to manage the transition, then maybe we could tolerate\n> > supporting this greatly simplified implementation of \"snapshot too\n> > old\".\n>\n> Interesting idea. I'm keen to try prototyping it to see how well it\n> works out it practice. Let me know soon if you already have designs\n> on that and I'll get out of your way, otherwise I'll give it a try and\n> share what I come up with.\n\nHere's a quick and dirty test patch of that idea (or my understanding\nof it), just for experiments. 
It introduces snapshot->expire_time and\na new timer SNAPSHOT_TIMEOUT to cause the next CHECK_FOR_INTERRUPTS()\nto set snapshot->too_old on any active or registered snapshots whose\ntime has come, and then try to advance MyPgXact->xmin, without\nconsidering the ones marked too old. That gets rid of the concept of\n\"early pruning\". You can use just regular pruning, because the\nsnapshot is no longer holding the regular xmin back. Then\nTestForOldSnapshot() becomes simply if (snapshot->too_old)\nereport(...).\n\nThere are certainly some rough edges, missed details and bugs in here,\nnot least the fact (pointed out to me by Andres in an off-list chat)\nthat we sometimes use short-lived snapshots without registering them;\nwe'd have to fix that. It also does nothing to ensure that\nTestForOldSnapshot() is actually called at all the right places, which\nis still required for correct results.\n\nIf those problems can be fixed, you'd have a situation where\nsnapshot-too-old is a coarse grained, blunt instrument that\neffectively aborts your transaction even if the whole cluster is\nread-only. I am not sure if that's really truly useful to anyone (ie\nif these ODBC cursor users would be satisfied; I'm not sure I\nunderstand that use case).\n\nHmm. I suppose it must be possible to put the LSN check back: if\n(snapshot->too_old && PageGetLSN(page) > snapshot->lsn) ereport(...).\nThen the granularity would be the same as today -- block level -- but\nthe complexity is transferred from the pruning side (has to deal with\nxid time map) to the snapshot-owning side (has to deal with timers,\nCFI() and make sure all snapshots are registered). Maybe not a great\ndeal, and maybe not easier than fixing the existing bugs.\n\nOne problem is all the new setitimer() syscalls. 
I feel like that\ncould be improved, as could statement_timeout, by letting existing\ntimers run rather than repeatedly rescheduling eagerly, so that eg a 1\nminute timeout never gets rescheduled more than once per minute. I\nhaven't looked into that, but I guess it's no worse than the existing\nimplementation's overheads anyway.\n\nPS in the patch the GUC is interpreted as milliseconds, which is more\nfun for testing but it should really be minutes like before.", "msg_date": "Wed, 15 Apr 2020 14:21:16 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "On Mon, Apr 13, 2020 at 5:14 PM Andres Freund <andres@anarazel.de> wrote:\n> FWIW, I think the part that is currently harder to fix is the time->xmin\n> mapping and some related pieces. Second comes the test\n> infrastructure. Compared to those, adding additional checks for old\n> snapshots wouldn't be too hard - although I'd argue that the approach of\n> sprinkling these tests everywhere isn't that scalable...\n\nJust trying out some ideas here... I suppose the wrapping problem\njust requires something along the lines of the attached, but now I'm\nwondering how to write decent tests for it. 
Using the\npg_clobber_current_snapshot_timestamp() function I mentioned in\nRobert's time->xmin thread, it's easy to build up a time map without\nresorting to sleeping etc, with something like:\n\nselect pg_clobber_current_snapshot_timestamp('3000-01-01 00:00:00Z');\nselect pg_current_xact_id();\nselect pg_clobber_current_snapshot_timestamp('3000-01-01 00:01:00Z');\nselect pg_current_xact_id();\nselect pg_clobber_current_snapshot_timestamp('3000-01-01 00:02:00Z');\nselect pg_current_xact_id();\nselect pg_clobber_current_snapshot_timestamp('3000-01-01 00:03:00Z');\nselect pg_current_xact_id();\nselect pg_clobber_current_snapshot_timestamp('3000-01-01 00:04:00Z');\n\nThen of course frozenXID can be advanced with eg update pg_database\nset datallowconn = 't' where datname = 'template0', then vacuumdb\n--freeze --all, and checked before and after with Robert's\npg_old_snapshot_time_mapping() SRF to see that it's truncated. But\nit's not really the level of stuff we'd ideally mess with in\npg_regress tests and I don't see any precedent, so I guess maybe I'll\nneed to go and figure out how to write some perl.", "msg_date": "Fri, 17 Apr 2020 15:37:12 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "On Fri, Apr 17, 2020 at 3:37 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Mon, Apr 13, 2020 at 5:14 PM Andres Freund <andres@anarazel.de> wrote:\n> > FWIW, I think the part that is currently harder to fix is the time->xmin\n> > mapping and some related pieces. Second comes the test\n> > infrastructure. Compared to those, adding additional checks for old\n> > snapshots wouldn't be too hard - although I'd argue that the approach of\n> > sprinkling these tests everywhere isn't that scalable...\n>\n> Just trying out some ideas here...\n> ... 
so I guess maybe I'll\n> need to go and figure out how to write some perl.\n\nHere's a very rough sketch of what I mean. Patches 0001-0003 are\nstolen directly from Robert. I think 0005's t/001_truncate.pl\ndemonstrates that the map is purged of old xids as appropriate. I\nsuppose this style of testing based on manually advancing the hands of\ntime should also allow for testing early pruning, but it may be Monday\nbefore I can try that so I'm sharing what I have so far in case it's\nuseful... I think this really wants to be in src/test/modules, not\ncontrib, but I just bolted it on top of what Robert posted.", "msg_date": "Fri, 17 Apr 2020 17:17:18 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "On Thu, Apr 16, 2020 at 11:37 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Then of course frozenXID can be advanced with eg update pg_database\n> set datallowconn = 't' where datname = 'template0', then vacuumdb\n> --freeze --all, and checked before and after with Robert's\n> pg_old_snapshot_time_mapping() SRF to see that it's truncated. But\n> it's not really the level of stuff we'd ideally mess with in\n> pg_regress tests and I don't see any precent, so I guess maybe I'll\n> need to go and figure out how to write some perl.\n\nThe reason I put it in contrib is because I thought it would possibly\nbe useful to anyone who is actually using this feature to be able to\nlook at this information. It's unclear to me that there's any less\nreason to provide introspection here than there is for, say, pg_locks.\n\nIt's sorta unclear to me why you continued the discussion of this on\nthis thread rather than the new one I started. 
Seems like doing it\nover there might be clearer.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 17 Apr 2020 08:19:43 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "On Sat, Apr 18, 2020 at 12:19 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Thu, Apr 16, 2020 at 11:37 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > Then of course frozenXID can be advanced with eg update pg_database\n> > set datallowconn = 't' where datname = 'template0', then vacuumdb\n> > --freeze --all, and checked before and after with Robert's\n> > pg_old_snapshot_time_mapping() SRF to see that it's truncated. But\n> > it's not really the level of stuff we'd ideally mess with in\n> > pg_regress tests and I don't see any precent, so I guess maybe I'll\n> > need to go and figure out how to write some perl.\n>\n> The reason I put it in contrib is because I thought it would possibly\n> be useful to anyone who is actually using this feature to be able to\n> look at this information. It's unclear to me that there's any less\n> reason to provide introspection here than there is for, say, pg_locks.\n\nMakes sense. I was talking more about the\npg_clobber_snapshot_timestamp() function I showed, which is for use by\ntests, not end users, since it does weird stuff to internal state.\n\n> It's sorta unclear to me why you continued the discussion of this on\n> this thread rather than the new one I started. Seems like doing it\n> over there might be clearer.\n\nI understood that you'd forked a new thread to discuss one particular\nproblem among the many that Andres nailed to the door, namely the xid\nmap's failure to be monotonic, and here I was responding to other\nthings from his list, namely the lack of defences against wrap-around\nand the lack of testing. 
Apparently I misunderstood. I will move to\nthe new thread for the next version I post, once I figure out if I can\nuse pg_clobber_snapshot_timestamp() in a TAP test to check early\nvacuum/pruning behaviour.\n\n\n", "msg_date": "Sat, 18 Apr 2020 08:39:43 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "On Fri, Apr 17, 2020 at 4:40 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> I understood that you'd forked a new thread to discuss one particular\n> problem among the many that Andres nailed to the door, namely the xid\n> map's failure to be monotonic, and here I was responding to other\n> things from his list, namely the lack of defences against wrap-around\n> and the lack of testing. Apparently I misunderstood.\n\nOh, maybe I'm the one who misunderstood...\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 18 Apr 2020 08:34:26 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> Oh, maybe I'm the one who misunderstood...\n\nSo, it's well over a year later, and so far as I can see exactly\nnothing has been done about snapshot_too_old's problems.\n\nI never liked that feature to begin with, and I would be very\nglad to undertake the task of ripping it out. If someone thinks\nthis should not happen, please commit to fixing it ... and not\n\"eventually\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 15 Jun 2021 12:51:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." 
}, { "msg_contents": "Hi,\n\nOn 2021-06-15 12:51:28 -0400, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > Oh, maybe I'm the one who misunderstood...\n> \n> So, it's well over a year later, and so far as I can see exactly\n> nothing has been done about snapshot_too_old's problems.\n> \n> I never liked that feature to begin with, and I would be very\n> glad to undertake the task of ripping it out. If someone thinks\n> this should not happen, please commit to fixing it ... and not\n> \"eventually\".\n\nI still think that's the most reasonable course. I actually like the\nfeature, but I don't think a better implementation of it would share\nmuch if any of the current infrastructure.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 15 Jun 2021 10:10:36 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "On Tue, Jun 15, 2021 at 9:51 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> So, it's well over a year later, and so far as I can see exactly\n> nothing has been done about snapshot_too_old's problems.\n\nFWIW I think that the concept itself is basically reasonable. The\nimplementation is very flawed, though, so it hardly enters into it.\n\n> I never liked that feature to begin with, and I would be very\n> glad to undertake the task of ripping it out. If someone thinks\n> this should not happen, please commit to fixing it ... and not\n> \"eventually\".\n\nISTM that this is currently everybody's responsibility, and therefore\nnobody's responsibility. That's probably why the problems haven't been\nresolved yet.\n\nI propose that the revert question be explicitly timeboxed. If the\nissues haven't been fixed by some date, then \"snapshot too old\"\nautomatically gets reverted without further discussion. 
This gives\nqualified hackers the opportunity to save the feature if they feel\nstrongly about it, and are actually willing to take responsibility for\nits ongoing maintenance.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 15 Jun 2021 10:10:42 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Tue, Jun 15, 2021 at 9:51 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> So, it's well over a year later, and so far as I can see exactly\n>> nothing has been done about snapshot_too_old's problems.\n\n> I propose that the revert question be explicitly timeboxed. If the\n> issues haven't been fixed by some date, then \"snapshot too old\"\n> automatically gets reverted without further discussion. This gives\n> qualified hackers the opportunity to save the feature if they feel\n> strongly about it, and are actually willing to take responsibility for\n> its ongoing maintenance.\n\nThe goal I have in mind is for snapshot_too_old to be fixed or gone\nin v15. I don't feel a need to force the issue sooner than that, so\nthere's plenty of time for someone to step up, if anyone wishes to.\n\nI imagine that we should just ignore the question of whether anything\ncan be done for it in the back branches. Given the problems\nidentified upthread, fixing it in a non-back-patchable way would be\nchallenging enough.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 15 Jun 2021 14:01:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "On Tue, Jun 15, 2021 at 11:01 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The goal I have in mind is for snapshot_too_old to be fixed or gone\n> in v15. 
I don't feel a need to force the issue sooner than that, so\n> there's plenty of time for someone to step up, if anyone wishes to.\n\nSeems more than reasonable to me. A year ought to be plenty of time if\nthe feature truly is salvageable.\n\nWhat do other people think? Ideally we could commit to that hard\ndeadline now. To me the important thing is to actually have a real\ndeadline that forces the issue one way or another. This situation must\nnot be allowed to drag on forever.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 15 Jun 2021 11:20:51 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "On Tue, Jun 15, 2021 at 12:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> So, it's well over a year later, and so far as I can see exactly\n> nothing has been done about snapshot_too_old's problems.\n\nProgress has been pretty limited, but not altogether nonexistent.\n55b7e2f4d78d8aa7b4a5eae9a0a810601d03c563 fixed, or at least seemed to\nfix, the time->XID mapping, which is one of the main things that\nAndres said was broken originally. Also, there are patches on this\nthread from Thomas Munro to add some test coverage for that case,\nanother problem Andres noted in his original email. I guess it\nwouldn't be too hard to get something committed there, and I'm willing\nto do it if Thomas doesn't want to and if there's any prospect of\nsalvaging the feature.\n\nBut that's not clear to me. I'm not clear how exactly how many\nproblems we know about and need to fix in order to keep the feature,\nand I'm also not clear how deep the hole goes. Like, if we need to get\na certain number of specific bugs fixed, I might be willing to do\nthat. If we need to commit to a major rewrite of the current\nimplementation, that's more than I can do. But I guess I don't\nunderstand exactly how bad the current problems are. 
Reviewing\ncomplaints from Andres from this thread:\n\n> Looking at TransactionIdLimitedForOldSnapshots() I think the ts ==\n> update_ts threshold actually needs to be ts >= update_ts, right now we\n> don't handle being newer than the newest bin correctly afaict (mitigated\n> by autovacuum=on with naptime=1s doing a snapshot more often). It's hard\n> to say, because there's no comments.\n\nThis seems specific enough to be analyzed and anything that is broken\ncan be fixed.\n\n> The whole lock nesting is very hazardous. Most (all?)\n> TestForOldSnapshot() calls happen with locks on on buffers held, and can\n> acquire lwlocks itself. In some older branches we do entire *catalog\n> searches* with the buffer lwlock held (for RelationHasUnloggedIndex()).\n\nI think it's unclear whether there are live problems in master in this area.\n\n> GetSnapshotData() using snapshot->lsn = GetXLogInsertRecPtr(); as the\n> basis to detect conflicts seems dangerous to me. Isn't that ignoring\n> inserts that are already in progress?\n\nDiscussion on this point trailed off. Upon rereading, I think Andres\nis correct that there's an issue; the snapshot's LSN needs to be set\nto a value not older than the last xlog insertion that has been\ncompleted rather than, as now, the last one that is started. I guess\nto get that value we would need to do something like\nWaitXLogInsertionsToFinish(), or some approximation of it e.g.\nGetXLogWriteRecPtr() at the risk of unnecessary snapshot-too-old\nerrors.\n\n> * In read-mostly workloads it can trigger errors in sessions that are\n> much younger than old_snapshot_threshold, if the transactionid is not\n> advancing.\n>\n> I've not tried to reproduce, but I suspect this can also cause wrong\n> query results. 
Because a later snapshot can have the same xmin as\n> older transactions, it sure looks like we can end up with data from an\n> older xmin getting removed, but the newer snapshot's whenTaken will\n> prevent TestForOldSnapshot_impl() from raising an error.\n\nI haven't really wrapped my head around this one, but it seems\namenable to a localized fix. It basically amounts to a complaint that\nGetOldSnapshotThresholdTimestamp() is returning a newer value than it\nshould. I don't know exactly what's required to make it not do that,\nbut it doesn't seem intractable.\n\n> * I am fairly sure that it can cause crashes (or even data corruption),\n> because it assumes that DML never needs to check for old snapshots\n> (with no meaningful justification). Leading to heap_update/delete to\n> assume the page header is a tuple.\n\nI don't understand the issue here, really. I assume there might be a\nwrong word here, because assuming that the page header is a tuple\ndoesn't sound like a thing that would actually happen. I think one of\nthe key problems for this feature is figuring out whether you've got\nsnapshot-too-old checks in all the right places. I think what is being\nalleged here is that heap_update() and heap_delete() need them, and\nthat it's not good enough to rely on the scan that found the tuple to\nbe updated or deleted having already performed those checks. It is not\nclear to me whether that is true, or how it could cause crashes.\nAndres may have explained this to me at some point, but if he did I\nhave unfortunately forgotten.\n\nMy general point here is that I would like to know whether we have a\nfinite number of reasonably localized bugs or a three-ring disaster\nthat is unrecoverable no matter what we do. Andres seems to think it\nis the latter, and I *think* Peter Geoghegan agrees, but I think that\nthe point might be worth a little more discussion. 
I'm unclear whether\nTom's dislike for the feature represents hostility to the concept -\nwith which I would have to disagree - or a judgement on the quality of\nthe implementation - which might be justified. For the record, and to\nPeter's point, I think it's reasonable to set v15 feature freeze as a\ndrop-dead date for getting this feature into acceptable shape, but I\nwould like to try to nail down what we think \"acceptable\" means in\nthis context.\n\nThanks,\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 15 Jun 2021 15:17:05 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> My general point here is that I would like to know whether we have a\n> finite number of reasonably localized bugs or a three-ring disaster\n> that is unrecoverable no matter what we do. Andres seems to think it\n> is the latter, and I *think* Peter Geoghegan agrees, but I think that\n> the point might be worth a little more discussion.\n\nTBH, I am not clear on that either.\n\n> I'm unclear whether\n> Tom's dislike for the feature represents hostility to the concept -\n> with which I would have to disagree - or a judgement on the quality of\n> the implementation - which might be justified.\n\nI think it's a klugy, unprincipled solution to a valid real-world\nproblem. I suspect the implementation issues are not unrelated to\nthe kluginess of the concept. Thus, I would really like to see us\nthrow this away and find something better. 
I admit I have nothing\nto offer about what a better solution to the problem would look like.\nBut I would really like it to not involve random-seeming query failures.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 15 Jun 2021 15:49:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "On Tue, Jun 15, 2021 at 12:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > My general point here is that I would like to know whether we have a\n> > finite number of reasonably localized bugs or a three-ring disaster\n> > that is unrecoverable no matter what we do. Andres seems to think it\n> > is the latter, and I *think* Peter Geoghegan agrees, but I think that\n> > the point might be worth a little more discussion.\n>\n> TBH, I am not clear on that either.\n\nI don't know for sure which it is, but that in itself isn't actually\nwhat matters to me. The most concerning thing is that I don't really\nknow how to *assess* the design now. The clear presence of at least\nseveral very severe bugs doesn't necessarily prove anything (it just\n*hints* at major design problems).\n\nIf I could make a very clear definitive statement on this then I'd\nprobably have to do ~1/3 of the total required work -- that'd be my\nguess. If it was easy to be quite sure here then we wouldn't still be\nhere 12 months later. In any case I don't think that the feature\ndeserves to be treated all that differently to something that was\ncommitted much more recently, given what we know. Frankly it took me\nabout 5 minutes to find a very serious bug in the feature, pretty much\nwithout giving it any thought. That is not a good sign.\n\n> I think it's a klugy, unprincipled solution to a valid real-world\n> problem. I suspect the implementation issues are not unrelated to\n> the kluginess of the concept. 
Thus, I would really like to see us\n> throw this away and find something better. I admit I have nothing\n> to offer about what a better solution to the problem would look like.\n> But I would really like it to not involve random-seeming query failures.\n\nI would be very happy to see somebody take this up, because it is\nimportant. The reality is that anybody that undertakes this task\nshould start with the assumption that they're starting from scratch,\nat least until they learn otherwise. So ISTM that it might as well be\ntrue that it needs a total rewrite, even if it turns out to not be\nstrictly true.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Tue, 15 Jun 2021 14:12:27 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "On Tue, Jun 15, 2021 at 12:17 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> My general point here is that I would like to know whether we have a\n> finite number of reasonably localized bugs or a three-ring disaster\n> that is unrecoverable no matter what we do. Andres seems to think it\n> is the latter, and I *think* Peter Geoghegan agrees, but I think that\n> the point might be worth a little more discussion. I'm unclear whether\n> Tom's dislike for the feature represents hostility to the concept -\n> with which I would have to disagree - or a judgement on the quality of\n> the implementation - which might be justified. For the record, and to\n> Peter's point, I think it's reasonable to set v15 feature freeze as a\n> drop-dead date for getting this feature into acceptable shape, but I\n> would like to try to nail down what we think \"acceptable\" means in\n> this context.\n\nWhat I had in mind was this: a committer adopting the feature\nthemselves. The committer would be morally obligated to maintain the\nfeature on an ongoing basis, just as if they were the original\ncommitter. 
This seems like the only sensible way of resolving this\nissue once and for all.\n\nIf it really is incredibly important that we keep this feature, or one\nlike it, then I have to imagine that somebody will step forward --\nthere is still ample opportunity. But if nobody steps forward, I'll be\nforced to conclude that perhaps it wasn't quite as important as I\nfirst thought. Anybody can agree that it's important in an abstract\nsense -- that's easy. What we need is a committer willing to sign on\nthe dotted line, which we're no closer to today than we were a year\nago. Actions speak louder than words.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 15 Jun 2021 14:32:11 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "On Wed, Apr 1, 2020 at 4:59 PM Andres Freund <andres@anarazel.de> wrote:\n> The primary issue here is that there is no TestForOldSnapshot() in\n> heap_hot_search_buffer(). Therefore index fetches will never even try to\n> detect that tuples it needs actually have already been pruned away.\n\nThis is still true, right? Nobody fixed this bug after 14 months? Even\nthough we're talking about returning rows that are not visible to the\nxact's snapshot?\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Tue, 15 Jun 2021 14:45:50 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> What I had in mind was this: a committer adopting the feature\n> themselves. The committer would be morally obligated to maintain the\n> feature on an ongoing basis, just as if they were the original\n> committer. 
This seems like the only sensible way of resolving this\n> issue once and for all.\n\nYeah, it seems clear that we need somebody to do that, given that\nKevin Grittner has been inactive for awhile. Even if the known\nproblems can be resolved by drive-by patches, I think this area\nneeds an ongoing commitment from someone.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 15 Jun 2021 17:50:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "Hi,\n\nOn 2021-06-15 15:17:05 -0400, Robert Haas wrote:\n> But that's not clear to me. I'm not clear how exactly how many\n> problems we know about and need to fix in order to keep the feature,\n> and I'm also not clear how deep the hole goes. Like, if we need to get\n> a certain number of specific bugs fixed, I might be willing to do\n> that. If we need to commit to a major rewrite of the current\n> implementation, that's more than I can do. But I guess I don't\n> understand exactly how bad the current problems are. Reviewing\n> complaints from Andres from this thread:\n\nOne important complaint I think your (useful!) list missed is that there's\nmissing *read side* checks that demonstrably lead to wrong query results:\nhttps://www.postgresql.org/message-id/CAH2-Wz%3DFQ9rbBKkt1nXvz27kmd4A8i1%2B7dcLTNqpCYibxX83VQ%40mail.gmail.com\nand that it's currently very hard to figure out where they need to be, because\nthere's no real explained model of what needs to be checked and what not.\n\n\n> > * I am fairly sure that it can cause crashes (or even data corruption),\n> > because it assumes that DML never needs to check for old snapshots\n> > (with no meaningful justification). Leading to heap_update/delete to\n> > assume the page header is a tuple.\n> \n> I don't understand the issue here, really. 
I assume there might be a\n> wrong word here, because assuming that the page header is a tuple\n> doesn't sound like a thing that would actually happen.\n\nI suspect what I was thinking of is that a tuple could get pruned away due to\ns_t_o, which would leave an LP_DEAD item around. As heap_update/delete neither\nchecks s_t_o, nor balks at targeting LP_DEAD items, we'd use the offset from\nthe LP_DEAD item. ItemIdSetDead() sets lp_off to 0 - which would mean that\nthe page header is interpreted as a tuple. Right?\n\n\n> I think one of the key problems for this feature is figuring out\n> whether you've got snapshot-too-old checks in all the right places. I\n> think what is being alleged here is that heap_update() and\n> heap_delete() need them, and that it's not good enough to rely on the\n> scan that found the tuple to be updated or deleted having already\n> performed those checks. It is not clear to me whether that is true, or\n> how it could cause crashes. Andres may have explained this to me at\n> some point, but if he did I have unfortunately forgotten.\n\nI don't think it is sufficient to rely on the scan. That works only as long as\nthe page with the to-be-modified tuple is pinned (since that'd prevent pruning\n/ vacuuming from working on the page), but I am fairly sure that there are\nplans where the target tuple is not pinned from the point it was scanned until\nit is modified. In which case it is entirely possible that the u/d target can\nbe pruned away due to s_t_o between the scan checking s_t_o and the u/d\nexecuting.\n\n\n> My general point here is that I would like to know whether we have a\n> finite number of reasonably localized bugs or a three-ring disaster\n> that is unrecoverable no matter what we do. Andres seems to think it\n> is the latter\n\nCorrect. 
I think there are numerous architectural issues in the way the feature is\nimplemented right now, and that it'd be a substantial project to address them.\n\n\n> For the record, and to Peter's point, I think it's reasonable to set\n> v15 feature freeze as a drop-dead date for getting this feature into\n> acceptable shape, but I would like to try to nail down what we think\n> "acceptable" means in this context.\n\nI think the absolute minimum would be to have\n- actually working tests\n- a halfway thorough code review of the feature\n- added documentation explaining where exactly s_t_o tests need to be\n- bugfixes obviously\n\nIf I were to work on the feature, I cannot imagine being sufficiently confident\nthe feature works as long as the xid->time mapping granularity is a\nminute. It's just not possible to write reasonable tests with the granularity\nbeing that high. Or even to do manual tests of it - I'm not that patient. But\nI "can accept" if somebody actually doing the work differs on this.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 15 Jun 2021 17:20:46 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "On Wed, Jun 16, 2021 at 7:17 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Progress has been pretty limited, but not altogether nonexistent.\n> 55b7e2f4d78d8aa7b4a5eae9a0a810601d03c563 fixed, or at least seemed to\n> fix, the time->XID mapping, which is one of the main things that\n> Andres said was broken originally. Also, there are patches on this\n> thread from Thomas Munro to add some test coverage for that case,\n> another problem Andres noted in his original email. I guess it\n> wouldn't be too hard to get something committed there, and I'm willing\n> to do it if Thomas doesn't want to and if there's any prospect of\n> salvaging the feature.\n\nFTR the latest patches are on a different thread[1]. 
I lost steam on\nthat stuff because I couldn't find a systematic way to deal with the\nlack of checks all over the place, or really understand how the whole\nsystem fits together with confidence. Those patches to fix an xid\nwraparound bug and make the testing work better may be useful and I'll\nbe happy to rebase them, depending on how this discussion goes, but it\nseems a bit like the proverbial deckchairs on the Titanic from what\nI'm reading... I think the technique I showed for exercising some\nbasic STO mechanisms and scenarios is probably useful, but I currently\nhave no idea how to prove much of anything about the whole system and\nam not personally in a position to dive into that rabbit hole in a\nPG15 time scale.\n\n[1] https://www.postgresql.org/message-id/CA%2BhUKGJyw%3DuJ4eL1x%3D%2BvKm16fLaxNPvKUYtnChnRkSKi024u_A%40mail.gmail.com\n\n\n", "msg_date": "Wed, 16 Jun 2021 13:32:01 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "On Tue, Jun 15, 2021 at 02:32:11PM -0700, Peter Geoghegan wrote:\n> What I had in mind was this: a committer adopting the feature\n> themselves. The committer would be morally obligated to maintain the\n> feature on an ongoing basis, just as if they were the original\n> committer. This seems like the only sensible way of resolving this\n> issue once and for all.\n> \n> If it really is incredibly important that we keep this feature, or one\n> like it, then I have to imagine that somebody will step forward --\n> there is still ample opportunity. But if nobody steps forward, I'll be\n> forced to conclude that perhaps it wasn't quite as important as I\n> first thought.\n\nHackers are rather wise, but the variety of PostgreSQL use is enormous. We\nsee that, among other ways, when regression fixes spike in each vN.1. 
The\n$SUBJECT feature was born in response to a user experience; a lack of hacker\ninterest doesn't invalidate that user experience. We face these competing\ninterests, at least:\n\n1) Some users want the feature kept so their application can use a certain\n pattern of long-running, snapshot-bearing transactions.\n\n2) (a) Some hackers want the feature gone so they can implement changes\n without making those changes cooperate with this feature. (b) Bugs in this\n feature make such cooperation materially harder.\n\n3) Some users want the feature gone because (2) is slowing the progress of\n features they do want.\n\n4) Some users want the feature kept because they don't use it but will worry\n what else is vulnerable to removal. PostgreSQL has infrequent history of\n removing released features. Normally, PostgreSQL lets some bugs languish\n indefinitely, e.g. in\n https://wiki.postgresql.org/wiki/PostgreSQL_13_Open_Items#Live_issues\n\n5) Some users want the feature gone because they try it, find a bug, and\n regret trying it or fear trying other features.\n\nA hacker adopting the feature would be aiming to reduce (2)(b) to zero,\nessentially. What other interests are relevant?\n\n\n", "msg_date": "Tue, 15 Jun 2021 21:59:45 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "On Tue, Jun 15, 2021 at 9:59 PM Noah Misch <noah@leadboat.com> wrote:\n> Hackers are rather wise, but the variety of PostgreSQL use is enormous. We\n> see that, among other ways, when regression fixes spike in each vN.1. The\n> $SUBJECT feature was born in response to a user experience; a lack of hacker\n> interest doesn't invalidate that user experience.\n\nI agree that it would be good to hear from some users about this. 
If a\nless painful workaround is possible at all, then users may be able to\nhelp -- maybe it'll be possible to cut scope.\n\n> We face these competing\n> interests, at least:\n\n> 1) Some users want the feature kept so their application can use a certain\n> pattern of long-running, snapshot-bearing transactions.\n\nUndoubtedly true.\n\n> 2) (a) Some hackers want the feature gone so they can implement changes\n> without making those changes cooperate with this feature. (b) Bugs in this\n> feature make such cooperation materially harder.\n\nIs that really true? Though it was probably true back when this thread\nwas started last year, things have changed. Andres found a way to work\naround the problems he had with snapshot too old, AFAIK.\n\n> A hacker adopting the feature would be aiming to reduce (2)(b) to zero,\n> essentially. What other interests are relevant?\n\nThe code simply isn't up to snuff. If the code was in a niche contrib\nmodule then maybe it would be okay to let this slide. But the fact is\nthat it touches critical parts of the system. This cannot be allowed\nto drag on forever. It's as simple as that.\n\nI admit that I think that the most likely outcome is that it gets\nreverted. I don't feel great about that. What else can be done about\nit that will really help the situation? No qualified person is likely\nto have the time to commit to fixing snapshot too old. Isn't that the\nreal problem here?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 15 Jun 2021 22:47:45 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "On Tue, Jun 15, 2021 at 10:47:45PM -0700, Peter Geoghegan wrote:\n> On Tue, Jun 15, 2021 at 9:59 PM Noah Misch <noah@leadboat.com> wrote:\n> > Hackers are rather wise, but the variety of PostgreSQL use is enormous. We\n> > see that, among other ways, when regression fixes spike in each vN.1. 
The\n> > $SUBJECT feature was born in response to a user experience; a lack of hacker\n> > interest doesn't invalidate that user experience.\n> \n> I agree that it would be good to hear from some users about this. If a\n> less painful workaround is possible at all, then users may be able to\n> help -- maybe it'll be possible to cut scope.\n\nIt would be good. But if we don't hear from users in 2021 or 2022, that\ndoesn't invalidate what users already said in 2016.\n\n> > We face these competing\n> > interests, at least:\n> \n> > 1) Some users want the feature kept so their application can use a certain\n> > pattern of long-running, snapshot-bearing transactions.\n> \n> Undoubtedly true.\n> \n> > 2) (a) Some hackers want the feature gone so they can implement changes\n> > without making those changes cooperate with this feature. (b) Bugs in this\n> > feature make such cooperation materially harder.\n> \n> Is that really true? Though it was probably true back when this thread\n> was started last year, things have changed. Andres found a way to work\n> around the problems he had with snapshot too old, AFAIK.\n\nWhen I say \"some hackers\", I don't mean that specific people think such\nthoughts right now. I'm saying that the expected cost of future cooperation\nwith the feature is nonzero, and bugs in the feature raise that cost. Perhaps\n(5) has more weight than (2). (If (2), (3) and (5) all have little weight,\nthen PostgreSQL should just keep the feature with its bugs.)\n\n> > A hacker adopting the feature would be aiming to reduce (2)(b) to zero,\n> > essentially. What other interests are relevant?\n> \n> The code simply isn't up to snuff. If the code was in a niche contrib\n> module then maybe it would be okay to let this slide. But the fact is\n> that it touches critical parts of the system. This cannot be allowed\n> to drag on forever. 
It's as simple as that.\n\nEven if we were to stipulate that this feature \"isn't up to snuff\", purging\nPostgreSQL of substandard features may or may not add sufficient value to\ncompensate for (1) and (4).\n\n\n", "msg_date": "Tue, 15 Jun 2021 23:24:20 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "On Tue, Jun 15, 2021 at 11:24 PM Noah Misch <noah@leadboat.com> wrote:\n> When I say \"some hackers\", I don't mean that specific people think such\n> thoughts right now. I'm saying that the expected cost of future cooperation\n> with the feature is nonzero, and bugs in the feature raise that cost.\n\nI see.\n\n> > > A hacker adopting the feature would be aiming to reduce (2)(b) to zero,\n> > > essentially. What other interests are relevant?\n> >\n> > The code simply isn't up to snuff. If the code was in a niche contrib\n> > module then maybe it would be okay to let this slide. But the fact is\n> > that it touches critical parts of the system. This cannot be allowed\n> > to drag on forever. It's as simple as that.\n>\n> Even if we were to stipulate that this feature \"isn't up to snuff\", purging\n> PostgreSQL of substandard features may or may not add sufficient value to\n> compensate for (1) and (4).\n\nI'm more concerned about 1 (compatibility) than about 4 (perception\nthat we deprecate things when we shouldn't), FWIW.\n\nIt's not that this is a substandard feature in the same way that (say)\ncontrib/ISN is a substandard feature -- it's not about the quality\nlevel per se. Nor is it the absolute number of bugs. The real issue is\nthat this is a substandard feature that affects crucial areas of the\nsystem. 
Strategically important things that we really cannot afford to\nbreak.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 15 Jun 2021 23:48:47 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "I think Andres's point earlier is the one that stands out the most for me:\n\n> I still think that's the most reasonable course. I actually like the\n> feature, but I don't think a better implementation of it would share\n> much if any of the current infrastructure.\n\nThat makes me wonder whether ripping the code out early in the v15\ncycle wouldn't be a better choice. It would make it easier for someone\nto start work on a new implementation.\n\nThere is the risk that the code would still be out and no new\nimplementation would have appeared by the release of v15 but it sounds\nlike that's people are leaning towards ripping it out at that point\nanyways.\n\nFwiw I too think the basic idea of the feature is actually awesome.\nThere are tons of use cases where you might have one long-lived\ntransaction working on a dedicated table (or even a schema) that will\nnever look at the rapidly mutating tables in another schema and never\ntrigger the error even though those tables have been vacuumed many\ntimes over during its run-time.\n\n\n", "msg_date": "Wed, 16 Jun 2021 11:41:42 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." 
}, { "msg_contents": "Greg Stark <stark@mit.edu> writes:\n> Fwiw I too think the basic idea of the feature is actually awesome.\n> There are tons of use cases where you might have one long-lived\n> transaction working on a dedicated table (or even a schema) that will\n> never look at the rapidly mutating tables in another schema and never\n> trigger the error even though those tables have been vacuumed many\n> times over during its run-time.\n\nI agree that's a great use-case. I don't like this implementation though.\nI think if you want to set things up like that, you should draw a line\nbetween the tables it's okay for the long transaction to touch and those\nit isn't, and then any access to the latter should predictably draw an\nerror. I really do not like the idea that it might work anyway, because\nthen if you accidentally break the rule, you have an application that just\nfails randomly ... probably only on the days when the boss wants that\nreport *now* not later.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 16 Jun 2021 12:00:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "Greetings,\n\n* Greg Stark (stark@mit.edu) wrote:\n> I think Andres's point earlier is the one that stands out the most for me:\n> \n> > I still think that's the most reasonable course. I actually like the\n> > feature, but I don't think a better implementation of it would share\n> > much if any of the current infrastructure.\n> \n> That makes me wonder whether ripping the code out early in the v15\n> cycle wouldn't be a better choice. 
It would make it easier for someone\n> to start work on a new implementation.\n> \n> There is the risk that the code would still be out and no new\n> implementation would have appeared by the release of v15 but it sounds\n> like that's people are leaning towards ripping it out at that point\n> anyways.\n> \n> Fwiw I too think the basic idea of the feature is actually awesome.\n> There are tons of use cases where you might have one long-lived\n> transaction working on a dedicated table (or even a schema) that will\n> never look at the rapidly mutating tables in another schema and never\n> trigger the error even though those tables have been vacuumed many\n> times over during its run-time.\n\nI've long felt that the appropriate approach to addressing that is to\nimprove on VACUUM and find a way to do better than just having the\nconditional of 'xmax < global min' drive if we can mark a given tuple as\nno longer visible to anyone.\n\nNot sure that all of the snapshot-too-old use cases could be solved that\nway, nor am I even sure it's actually possible to make VACUUM smarter in\nthat way without introducing other problems or having to track much more\ninformation than we do today, but it'd sure be nice if we could address\nthe use-case you outline above while also not introducing query\nfailures if that transaction does happen to decide to go look at some\nother table (naturally, the tuples which are in that rapidly mutating\ntable that *would* be visible to the long-running transaction would have\nto be kept around to make things work, but if it's rapidly mutating then\nthere's very likely lots of tuples that the long-running transaction\ncan't see in it, and which nothing else can either, and therefore could\nbe vacuumed).\n\nThanks,\n\nStephen", "msg_date": "Wed, 16 Jun 2021 12:11:45 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." 
}, { "msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> I've long felt that the appropriate approach to addressing that is to\n> improve on VACUUM and find a way to do better than just having the\n> conditional of 'xmax < global min' drive if we can mark a given tuple as\n> no longer visible to anyone.\n\nYeah, I think this scenario of a few transactions with old snapshots\nand the rest with very new ones could be improved greatly if we exposed\nmore info about backends' snapshot state than just \"oldest xmin\". But\nthat might be expensive to do.\n\nI remember that Heikki was fooling with a patch that reduced snapshots\nto LSNs. If we got that done, it'd be practical to expose complete\ninfo about backends' snapshot state in a lot of cases (i.e., anytime\nyou had less than N live snapshots).\n\nOf course, there's still the question of how VACUUM could cheaply\napply such info to decide what could be purged.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 16 Jun 2021 13:04:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "On Wed, Jun 16, 2021 at 10:04 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I remember that Heikki was fooling with a patch that reduced snapshots\n> to LSNs. If we got that done, it'd be practical to expose complete\n> info about backends' snapshot state in a lot of cases (i.e., anytime\n> you had less than N live snapshots).\n>\n> Of course, there's still the question of how VACUUM could cheaply\n> apply such info to decide what could be purged.\n\nI would think that it wouldn't really matter inside VACUUM -- it would\nonly really need to be either an opportunistic pruning or an\nopportunistic index deletion thing -- probably both. Most of the time\nVACUUM doesn't seem to end up doing most of the work of removing\ngarbage versions. 
It's mostly useful for \"floating garbage\", to use\nthe proper GC memory management term.\n\nIt's not just because opportunistic techniques are where the real work\nof removing garbage is usually done these days. It's also because\nopportunistic techniques are triggered in response to an immediate\nproblem, like an overfull heap page or an imminent page split that\nwe'd like to avoid -- they can actually see what's going on at the\nlocal level in a way that doesn't really work inside VACUUM.\n\nThis also means that they can cycle through strategies for a page,\nstarting with the cheapest and coarsest grained cleanup, progressing\nto finer grained cleanup. You only really need the finer grained\ncleanup when the coarse grained cleanup (simple OldestXmin style\ncutoff) fails. And even then you only need to use the slowpath when\nyou have a pretty good idea that it'll actually be useful -- you at\nleast know up front that there are a bunch of RECENTLY_DEAD tuples\nthat very well might be freeable once you use the slow path.\n\nWe can leave the floating garbage inside heap pages that hardly ever\nget opportunistic pruning behind for VACUUM. We might even find that\nan advanced strategy that does clever things in order to cleanup\nintermediate versions isn't actually needed all that often (it's\nexecuted perhaps orders of magnitude less frequently than simple\nopportunistic pruning is executed) -- even when the clever technique\nreally helps the workload. Technically opportunistic pruning might be\n99%+ effective, even when it doesn't look like it's effective to\nusers. The costs in this area are often very nonlinear. It can be very\ncounterintuitive.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Wed, 16 Jun 2021 10:44:49 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." 
}, { "msg_contents": "Hi,\n\nOn 2021-06-15 21:59:45 -0700, Noah Misch wrote:\n> Hackers are rather wise, but the variety of PostgreSQL use is enormous. We\n> see that, among other ways, when regression fixes spike in each vN.1. The\n> $SUBJECT feature was born in response to a user experience; a lack of hacker\n> interest doesn't invalidate that user experience. We face these competing\n> interests, at least:\n> \n> 1) Some users want the feature kept so their application can use a certain\n> pattern of long-running, snapshot-bearing transactions.\n\nThis is obviously true. However, given that the feature practically did\nnot work at all before 55b7e2f4d78d8aa7b4a5eae9a0a810601d03c563 and\nstill cannot really be described as working (e.g. index scans returning\nwrong query results), and that there have been two complaints about it as far\nas I know, I am led to believe that it does not have a great many\nusers.\n\n\n> 2) (a) Some hackers want the feature gone so they can implement changes\n> without making those changes cooperate with this feature. (b) Bugs in this\n> feature make such cooperation materially harder.\n\nI think the a) part is a large problem. Primarily because it's so\nunclear what one exactly has to do where (no docs/comments explaining\nthat) and because there's no usable test framework.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 16 Jun 2021 11:06:10 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "Hi,\n\nOn 2021-06-16 13:04:07 -0400, Tom Lane wrote:\n> Yeah, I think this scenario of a few transactions with old snapshots\n> and the rest with very new ones could be improved greatly if we exposed\n> more info about backends' snapshot state than just "oldest xmin". But\n> that might be expensive to do.\n\nI think it'd be pretty doable now. 
The snapshot scalability changes\nseparated out information needed to do vacuuming / pruning (i.e. xmin)\nfrom the information needed to build a snapshot (xid, flags, subxids\netc). Because xmin is not frequently accessed from other backends\nanymore, it is not important anymore to touch it as rarely as\npossible. From the cross-backend POV I think it'd be practically free to\ntrack a backend's xmax now.\n\nIt's not quite as obvious that it'd be essentially free to track a\nbackend's xmax across all the snapshots it uses. I think we'd basically\nneed a second pairingheap in snapmgr.c to track the "most advanced"\nxmax? That's *probably* fine, but I'm not 100% - Heikki wrote a faster\nheap implementation for snapmgr.c for a reason I assume.\n\n\nI think the hard part of this would be much more on the pruning / vacuum\nside of things. There's two difficulties:\n\n1) Keeping it cheap to determine whether a tuple can be vacuumed,\n   particularly while doing on-access pruning. This likely means that\n   we'd only assemble the information to do visibility determination for\n   rows above the "dead for everybody" horizon when encountering a\n   sufficiently old tuple. And then we need a decent datastructure for\n   checking whether an xid is in one of the "not needed" xid ranges.\n\n   This seems solvable.\n\n2) Modeling when it is safe to remove row versions. It is easy to remove\n   a tuple that was inserted and deleted within one "not needed" xid\n   range, but it's far less obvious when it is safe to remove row\n   versions where prior/later row versions are outside of such a gap.\n\n   Consider e.g. an update chain where the oldest snapshot can see one\n   row version, then there is a chain of rows that could be vacuumed\n   except for the old snapshot, and then there's a live version. 
If the\n old session updates the row version that is visible to it, it needs\n to be able to follow the xid chain.\n\n This seems hard to solve in general.\n\n It perhaps is sufficiently effective to remove row version chains\n entirely within one removable xid range. And it'd probably be doable to\n also address the case where a chain is larger than one range, as long\n as all the relevant row versions are within one page: We can fix up\n the ctids of older still visible row versions to point to the\n successor of pruned row versions.\n\n But I have a hard time seeing a realistic approach to removing chains\n that span xid ranges and multiple pages. The locking and efficiency\n issues seem substantial.\n\nGreetings,\n\nAndres\n\n\n", "msg_date": "Wed, 16 Jun 2021 11:27:57 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "On Wed, Jun 16, 2021 at 11:06 AM Andres Freund <andres@anarazel.de> wrote:\n> > 2) (a) Some hackers want the feature gone so they can implement changes\n> > without making those changes cooperate with this feature. (b) Bugs in this\n> > feature make such cooperation materially harder.\n>\n> I think the a) part is a large problem. Primarily because it's so\n> unclear what one exactly has to do where (no docs/comments explaining\n> that) and because there's no usable test framework.\n\nRight. This is what I meant yesterday, when talking about design\nissues. It's really not about the bugs so much. We probably could go\nthrough them one by one until things stopped being visibly broken,\nwithout going to a huge amount of trouble -- it's not that hard to\npaper over these things without anybody noticing. This is clear just\nwhen you look at how long it took anybody to notice the problems we do\nhave. Whether or not that amounts to \"just fixing the bugs\" is perhaps\nopen to interpretation. 
Either way I would not be comfortable with\neven claiming that \"fixing the bugs\" in this way actually makes the\nsituation better overall -- it might make it even worse. So in a more\nfundamental sense it would actually be really hard to fix these bugs.\nI would never have confidence in a fix like that.\n\nI really don't see a way around it -- we have to declare technical\ndebt bankruptcy here. Whether or not that means reverting the feature\nor rewriting it from scratch remains to be seen. That's another\nquestion entirely, and has everything to do with somebody's\nwillingness to adopt the project and little to do with how any\nindividual feels about it -- just like with a new feature. It does us\nno good to put off the question indefinitely.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Wed, 16 Jun 2021 11:29:36 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "Hi,\n\nOn 2021-06-16 10:44:49 -0700, Peter Geoghegan wrote:\n> On Wed, Jun 16, 2021 at 10:04 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Of course, there's still the question of how VACUUM could cheaply\n> > apply such info to decide what could be purged.\n\n> I would think that it wouldn't really matter inside VACUUM -- it would\n> only really need to be either an opportunistic pruning or an\n> opportunistic index deletion thing -- probably both. Most of the time\n> VACUUM doesn't seem to end up doing most of the work of removing\n> garbage versions. It's mostly useful for \"floating garbage\", to use\n> the proper GC memory management term.\n\nI don't fully agree with this. For one, there are workloads where VACUUM\nremoves the bulk of the dead tuples. For another, slowing down VACUUM\ncan cause a slew of follow-on problems, so being careful to not\nintroduce new bottlenecks is important. 
And I don't think just doing\nthis optimization as part of on-access pruning is a reasonable\nsolution. And it's not like making on-access pruning slower is\nunproblematic either.\n\nBut as I said nearby, I think the hardest part is figuring out how to\ndeal with ctid chains, not the efficiency of the xid->visibility lookup\n(or the collection of data necessary for that lookup).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 16 Jun 2021 12:06:37 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "On Wed, Jun 16, 2021 at 11:27 AM Andres Freund <andres@anarazel.de> wrote:\n> 2) Modeling when it is safe to remove row versions. It is easy to remove\n> a tuple that was inserted and deleted within one \"not needed\" xid\n> range, but it's far less obvious when it is safe to remove row\n> versions where prior/later row versions are outside of such a gap.\n>\n> Consider e.g. an update chain where the oldest snapshot can see one\n> row version, then there is a chain of rows that could be vacuumed\n> except for the old snapshot, and then there's a live version. 
If the\n> old session updates the row version that is visible to it, it needs\n> to be able to follow the xid chain.\n>\n> This seems hard to solve in general.\n\nAs I've said to you before, I think that it would make sense to solve\nthe problem inside heap_index_delete_tuples() first (for index tuple\ndeletion) -- implement an advanced version for heap pruning later.\nThat gives users a significant benefit without requiring that you\nsolve this hard problem with xmin/xmax and update chains.\n\nI don't think that it matters that index AMs still only have LP_DEAD\nbits set when tuples are dead to all snapshots including the oldest.\nNow that we can batch TIDs within each call to\nheap_index_delete_tuples() to pick up \"extra\" deletable TIDs from the\nsame heap blocks, we'll often be able to delete a significant number\nof extra index tuples whose TIDs are in a \"not needed\" range. Whereas\ntoday, without the \"not needed\" range mechanism in place, we just\ndelete the index tuples that are LP_DEAD-set already, plus maybe a few\nothers (\"extra index tuples\" that are not even needed by the oldest\nsnapshot) -- but that's it.\n\nWe might miss our chance to ever delete the nearby index tuples\nforever, just because we missed the opportunity once. Recall that the\nLP_DEAD bit being set for an index tuple isn't just information about\nthe index tuple in Postgres 14+ -- it also suggests that the *heap\nblock* has many more index tuples that we can delete that aren't\nLP_DEAD set in the index. And so nbtree will check those extra nearby\nTIDs out in passing within heap_index_delete_tuples(). 
We currently\nlose this valuable hint about the heap block forever if we delete the\nLP_DEAD-set index tuples, unless we get lucky and somebody sets a few\nmore index tuples for the same heap blocks before the next time the\nleaf page fills up (and heap_index_delete_tuples() must be called).\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 16 Jun 2021 12:08:18 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "On Wed, Jun 16, 2021 at 12:06 PM Andres Freund <andres@anarazel.de> wrote:\n> > I would think that it wouldn't really matter inside VACUUM -- it would\n> > only really need to be either an opportunistic pruning or an\n> > opportunistic index deletion thing -- probably both. Most of the time\n> > VACUUM doesn't seem to end up doing most of the work of removing\n> > garbage versions. It's mostly useful for \"floating garbage\", to use\n> > the proper GC memory management term.\n>\n> I don't fully agree with this. For one, there are workloads where VACUUM\n> removes the bulk of the dead tuples.\n\nIt's definitely much more important that VACUUM run often when non-HOT\nupdates are the norm, and there are lots of them. But that's probably\nnot going to be helped all that much by this technique anyway.\n\nMostly I'm just saying I'd start elsewhere and do heapam later. And\nprobably do VACUUM itself last of all, if that usefully cut scope.\n\n> For another, slowing down VACUUM\n> can cause a slew of follow-on problems, so being careful to not\n> introduce new bottlenecks is important. And I don't think just doing\n> this optimization as part of on-access pruning is reasonable\n> solution. 
And it's not like making on-access pruning slower is\n> unproblematic either.\n\nI think that on-access pruning is much more important because it's the\nonly hope we have of keeping the original heap page intact, in the\nsense that there are no non-HOT updates over time, though there may be\nmany HOT updates. And no LP_DEAD items ever accumulate. It's not so\nmuch about cleaning up bloat as it is about *preserving* the heap\npages in this sense.\n\nIf in the long run it's impossible to keep the page intact in this\nsense then we will still have most of our current problems. It might\nnot make that much practical difference if we simply delay the problem\n-- we kinda have to prevent it entirely, at least for a given\nworkload. So I'm mostly concerned about keeping things stable over\ntime, at the level of individual pages.\n\n> But as I said nearby, I think the hardest part is figuring out how to\n> deal with ctid chains, not the efficiency of the xid->visibility lookup\n> (or the collection of data necessary for that lookup).\n\nDefinitely true.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 16 Jun 2021 12:18:20 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "Greetings,\n\n* Peter Geoghegan (pg@bowt.ie) wrote:\n> On Wed, Jun 16, 2021 at 12:06 PM Andres Freund <andres@anarazel.de> wrote:\n> > > I would think that it wouldn't really matter inside VACUUM -- it would\n> > > only really need to be either an opportunistic pruning or an\n> > > opportunistic index deletion thing -- probably both. Most of the time\n> > > VACUUM doesn't seem to end up doing most of the work of removing\n> > > garbage versions. It's mostly useful for \"floating garbage\", to use\n> > > the proper GC memory management term.\n> >\n> > I don't fully agree with this. 
For one, there are workloads where VACUUM\n> > removes the bulk of the dead tuples.\n> \n> It's definitely much more important that VACUUM run often when non-HOT\n> updates are the norm, and there are lots of them. But that's probably\n> not going to be helped all that much by this technique anyway.\n\nI don't follow this argument. Surely there are many, many cases out\nthere where there's very few HOT updates but lots of non-HOT updates\nwhich create lots of dead rows that can't currently be cleaned up if\nthere's a long running transaction hanging around.\n\n> Mostly I'm just saying I'd start elsewhere and do heapam later. And\n> probably do VACUUM itself last of all, if that usefully cut scope.\n\nNot quite following what 'elsewhere' means here or what it would entail\nif it involves cleaning up dead tuples but doesn't involve heapam. I\ncan sort of follow the idea of working on the routine page-level cleanup\nof tuples rather than VACUUM, except that would seem to require one to\ndeal with the complexities of ctid chains discussed below and therefore\nbe a larger and more complicated effort than if one were to tackle\nVACUUM and perhaps in the first round cut scope by explicitly ignoring\nctid chains.\n\n> > For another, slowing down VACUUM\n> > can cause a slew of follow-on problems, so being careful to not\n> > introduce new bottlenecks is important. And I don't think just doing\n> > this optimization as part of on-access pruning is reasonable\n> > solution. And it's not like making on-access pruning slower is\n> > unproblematic either.\n\nI don't know that slowing down VACUUM, which already goes purposefully\nslow by default when run out of autovacuum, needs to really be stressed\nover, particularly when what we're talking about here are CPU cycles. I\ndo think it'd make sense to have a heuristic which decides if we're\ngoing to put in the effort to try to do this kind of pruning. 
That is-\nif the global Xmin and the current transaction are only a few thousand\napart or something along those lines then don't bother, but if there's\nbeen 100s of thousands of transactions then enable it (perhaps allowing\ncontrol over this or allowing users to explicitly ask VACUUM to 'work\nharder' or such).\n\n> I think that on-access pruning is much more important because it's the\n> only hope we have of keeping the original heap page intact, in the\n> sense that there are no non-HOT updates over time, though there may be\n> many HOT updates. And no LP_DEAD items ever accumulate. It's not so\n> much about cleaning up bloat as it is about *preserving* the heap\n> pages in this sense.\n> \n> If in the long run it's impossible to keep the page intact in this\n> sense then we will still have most of our current problems. It might\n> not make that much practical difference if we simply delay the problem\n> -- we kinda have to prevent it entirely, at least for a given\n> workload. So I'm mostly concerned about keeping things stable over\n> time, at the level of individual pages.\n\nI do think that's a worthwhile goal, but if we could get some kind of\ncleanup happening, that strikes me as better than the nothing that we\nhave today. 
Which side makes sense to tackle first is certainly a\ndiscussion that could be had but I'd go for \"do the simple thing first\".\n\n> > But as I said nearby, I think the hardest part is figuring out how to\n> > deal with ctid chains, not the efficiency of the xid->visibility lookup\n> > (or the collection of data necessary for that lookup).\n> \n> Definitely true.\n\nIt strikes me that stressing over ctid chains, while certainly something\nto consider, at this point is putting the cart before the horse in this\ndiscussion- there's not much sense in it if we haven't actually got the\ndata collection piece figured out and working (and hopefully in a manner\nthat minimizes the overhead from it) and then worked out the logic to\nfigure out if a given tuple is actually visible to any running\ntransaction. As I say above, it seems like it'd be a great win even if\nit was initially only able to deal with 'routine'/non-chained cases and\nonly with VACUUM.\n\nThe kind of queue tables that I'm thinking of, at least, are ones like\nwhat PgQ uses: https://github.com/pgq/pgq\n\nNow, that already works around our lacking here by using TRUNCATE and\ntable rotation, but if we improved here then it'd potentially be able to\nbe rewritten to use routine DELETE's instead of TRUNCATE. Even the\nUPDATEs which are done to process a batch for a subscriber look to be\nnon-HOT due to updating indexed fields anyway (in\npgq.next_batch_custom(), it's setting subscription.sub_batch which has a\nUNIQUE btree on it). 
Looks like there's a HOT UPDATE for the queue\ntable when a table swap happens, but that UPDATE wouldn't actually be\nnecessary if we'd fix the issue with just routine INSERT/DELETE leading\nto tons of dead tuples that can't be VACUUM'd if a long running\ntransaction is running, and I doubt that UPDATE was actually\nintentionally designed to take advantage of HOT, it just happened to\nwork that way.\n\nThe gist of what I'm trying to get at here is that the use-cases I've\nseen, and where people have put in the effort to work around the long\nrunning transaction vs. VACUUM issue by using hacks like table swapping\nand TRUNCATE, aren't cases where there's a lot of HOT updating happening\non the tables that are getting bloated due to VACUUM being unable to\nclean up tuples. So, if that's actually the easier thing to tackle,\nfantastic, let's do it and then figure out how to improve on it to\nhandle the more complicated cases later. (This presumes that it's\nactually possible to essentially 'skip' the hard cases and still have a\nworking implementation, of course).\n\nThanks,\n\nStephen", "msg_date": "Thu, 17 Jun 2021 11:31:30 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "On Wed, Jun 16, 2021 at 12:00:57PM -0400, Tom Lane wrote:\n> Greg Stark <stark@mit.edu> writes:\n> > I think Andres's point earlier is the one that stands out the most for me:\n> > \n> > > I still think that's the most reasonable course. I actually like the\n> > > feature, but I don't think a better implementation of it would share\n> > > much if any of the current infrastructure.\n> > \n> > That makes me wonder whether ripping the code out early in the v15\n> > cycle wouldn't be a better choice. It would make it easier for someone\n> > to start work on a new implementation.\n\nDeleting the feature early is better than deleting the feature late,\ncertainly. 
(That doesn't tell us about the relative utility of deleting the\nfeature early versus never deleting the feature.)\n\n> > Fwiw I too think the basic idea of the feature is actually awesome.\n> > There are tons of use cases where you might have one long-lived\n> > transaction working on a dedicated table (or even a schema) that will\n> > never look at the rapidly mutating tables in another schema and never\n> > trigger the error even though those tables have been vacuumed many\n> > times over during its run-time.\n> \n> I agree that's a great use-case. I don't like this implementation though.\n> I think if you want to set things up like that, you should draw a line\n> between the tables it's okay for the long transaction to touch and those\n> it isn't, and then any access to the latter should predictably draw an\n> error.\n\nI agree that would be a useful capability, but it solves a different problem.\n\n> I really do not like the idea that it might work anyway, because\n> then if you accidentally break the rule, you have an application that just\n> fails randomly ... probably only on the days when the boss wants that\n> report *now* not later.\n\nEvery site adopting SERIALIZABLE learns that transactions can fail due to\nmostly-unrelated concurrent activity. ERRCODE_SNAPSHOT_TOO_OLD is another\nkind of serialization failure, essentially. Moreover, one can opt for an\nold_snapshot_threshold value longer than the runtime of the boss's favorite\nreport. Of course, nobody would reject a replacement that has all the\nadvantages of old_snapshot_threshold and fewer transaction failures. Once\nyour feature rewrite starts taking away advantages to achieve fewer\ntransaction failures, that rewrite gets a lot more speculative.\n\nnm\n\n\n", "msg_date": "Thu, 17 Jun 2021 20:49:31 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." 
}, { "msg_contents": "On Wed, Apr 15, 2020 at 2:21 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Mon, Apr 13, 2020 at 2:58 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > On Fri, Apr 3, 2020 at 2:22 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > > I'm thinking of a version of \"snapshot too old\" that amounts to a\n> > > statement timeout that gets applied for xmin horizon type purposes in\n> > > the conventional way, while only showing an error to the client if and\n> > > when they access literally any buffer (though not when the relation is\n> > > a system catalog). Is it possible that something along those lines is\n> > > appreciably better than nothing to users? If it is, and if we can find\n> > > a way to manage the transition, then maybe we could tolerate\n> > > supporting this greatly simplified implementation of \"snapshot too\n> > > old\".\n\nI rebased that patch and fleshed it out just a bit more. Warning:\nexperiment grade, incomplet, inkorrect, but I think it demonstrates\nthe main elements of Peter's idea from last year.\n\nFor READ COMMITTED, the user experience is much like a statement\ntimeout, except that it can keep doing stuff that doesn't read\nnon-catalog data. Trivial example: pg_sleep(10) happily completes\nwith old_snapshot_threshold=5s, as do queries that materialise all\ntheir input data in time, and yet your xmin is zapped. For REPEATABLE\nREAD it's obviously tied to your first query, and produces \"snapshot\ntoo old\" if you repeatedly SELECT from a little table and your time\nruns out.\n\nIn this version I put the check into the heapam visibility + vismap\nchecks, instead of in the buffer access code. The reason is that the\nlower level buffer access routines don't have a snapshot, but if you\npush the check down to buffer access and just check the \"oldest\"\nsnapshot (definition in this patch, not in master), then you lose some\npotential granularity with different cursors. 
If you try to put it at\na higher level in places that have a snapshot and access a buffer, you\nrun into the problem of being uncertain that you've covered all the\nbases. But I may be underthinking that.\n\nQuite a few unresolved questions about catalog and toast snapshots and\nother special stuff, as well as the question of whether it's actually\nuseful or the overheads can be made cheap enough.\n\n> Hmm. I suppose it must be possible to put the LSN check back: if\n> (snapshot->too_old && PageGetLSN(page) > snapshot->lsn) ereport(...).\n> Then the granularity would be the same as today -- block level -- but\n> the complexity is transferred from the pruning side (has to deal with\n> xid time map) to the snapshot-owning side (has to deal with timers,\n> CFI() and make sure all snapshots are registered). Maybe not a great\n> deal, and maybe not easier than fixing the existing bugs.\n\nIt is a shame to lose the current LSN-based logic; it's really rather\nclever (except for being broken).\n\n> One problem is all the new setitimer() syscalls. I feel like that\n> could be improved, as could statement_timeout, by letting existing\n> timers run rather than repeatedly rescheduling eagerly, so that eg a 1\n> minute timeout never gets rescheduled more than once per minute. I\n> haven't looked into that, but I guess it's no worse than the existing\n> implementation's overheads anyway.\n\nAt least that problem was fixed, by commit 09cf1d52. (Not entirely\nsure why I got away with not reenabling the timer between queries, but\nI didn't look very hard).", "msg_date": "Fri, 18 Jun 2021 16:52:43 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." }, { "msg_contents": "On Thu, 17 Jun 2021 at 23:49, Noah Misch <noah@leadboat.com> wrote:\n>\n> On Wed, Jun 16, 2021 at 12:00:57PM -0400, Tom Lane wrote:\n> > I agree that's a great use-case. 
I don't like this implementation though.\n> > I think if you want to set things up like that, you should draw a line\n> > between the tables it's okay for the long transaction to touch and those\n> > it isn't, and then any access to the latter should predictably draw an\n> > error.\n\n> I agree that would be a useful capability, but it solves a different problem.\n\nYeah, I think this discussion veered off into how to improve vacuum\nsnapshot tracking. That's a worthwhile endeavour but it doesn't\nreally address the use case this patch was there to target.\n\nFundamentally there's no way in SQL for users to give this information\nto Postgres. There's nothing in SQL or our API that lets a client\ninform Postgres what tables a session is going to access within a\ntransaction in the future.\n\nWhat this alternative would look like would be a command that a client\nwould have to issue at the start of every transaction listing every\ntable that transaction will be allowed to touch. Any attempt to read\nfrom any other table during the transaction would then get an error.\n\nThat sounds like it would be neat but it wouldn't work great with the\ngeneral approach in Postgres of having internal functions accessing\nrelations on demand (think of catalog tables, toast tables, and\npg_proc functions).\n\nThe \"snapshot too old\" approach is much more in line with Postgres's\ngeneral approach of giving users a general purpose platform and then\ndealing gracefully with the consequences.\n\n-- \ngreg\n\n\n", "msg_date": "Wed, 23 Jun 2021 01:21:00 -0400", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: snapshot too old issues, first around wraparound and then more." } ]
[ { "msg_contents": "Hello,\nI am running postgres 11.5 and we were having issues with shared segments.\nSo I increased the max_connection as suggested by you guys and reduced my\nwork_mem to 600M.\n\nRight now instead, it is the second time I see this error :\n\nERROR: could not resize shared memory segment \"/PostgreSQL.2137675995\" to\n33624064 bytes: Interrupted system call\n\nSo do you know what it means and how can I solve it?\n\nThanks a lot,\nNicola", "msg_date": "Wed, 1 Apr 2020 14:51:07 +0200", "msg_from": "Nicola Contu <nicola.contu@gmail.com>", "msg_from_op": true, "msg_subject": "" }, { "msg_contents": "I provided the subject, and added -hackers.\n\n> Hello,\n> I am running postgres 11.5 and we were having issues with shared segments.\n> So I increased the max_connection as suggested by you guys and reduced my\n> work_mem to 600M.\n> \n> Right now instead, it is the second time I see this error :\n> \n> ERROR: could not resize shared memory segment \"/PostgreSQL.2137675995\" to\n> 33624064 bytes: Interrupted system call\n\nThe function posix_fallocate is protected against EINTR.\n\n| do\n| {\n| \trc = posix_fallocate(fd, 0, size);\n| } while (rc == EINTR && !(ProcDiePending || QueryCancelPending));\n\nBut not for ftruncate and write. Don't we need to protect them from\nEINTR as the attached?\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 02 Apr 2020 17:25:40 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "EINTR while resizing dsm segment." 
}, { "msg_contents": "On Thu, Apr 2, 2020 at 9:25 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> I provided the subject, and added -hackers.\n>\n> > Hello,\n> > I am running postgres 11.5 and we were having issues with shared segments.\n> > So I increased the max_connection as suggested by you guys and reduced my\n> > work_mem to 600M.\n> >\n> > Right now instead, it is the second time I see this error :\n> >\n> > ERROR: could not resize shared memory segment \"/PostgreSQL.2137675995\" to\n> > 33624064 bytes: Interrupted system call\n>\n> The function posix_fallocate is protected against EINTR.\n>\n> | do\n> | {\n> | rc = posix_fallocate(fd, 0, size);\n> | } while (rc == EINTR && !(ProcDiePending || QueryCancelPending));\n>\n> But not for ftruncate and write. Don't we need to protect them from\n> ENTRI as the attached?\n\nWe don't handle EINTR for write() generally because that's not\nsupposed to be necessary on local files (local disks are not \"slow\ndevices\", and we document that if you're using something like NFS you\nshould use its \"hard\" mount option so that it behaves that way too).\nAs for ftruncate(), you'd think it'd be similar, and I can't think of\na more local filesystem than tmpfs (where POSIX shmem lives on Linux),\nbut I can't seem to figure that out from reading man pages; maybe I'm\nreading the wrong ones. Perhaps in low memory situations, an I/O wait\npath reached by ftruncate() can return EINTR here rather than entering\nD state (non-interruptable sleep) or restarting due to our SA_RESTART\nflag... anyone know?\n\nAnother thought: is there some way for the posix_fallocate() retry\nloop to exit because (ProcDiePending || QueryCancelPending), but then\nfor CHECK_FOR_INTERRUPTS() to do nothing, so that we fall through to\nreporting the EINTR?\n\n\n", "msg_date": "Sat, 4 Apr 2020 13:48:57 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: EINTR while resizing dsm segment." 
}, { "msg_contents": "So that seems to be a bug, correct?\nJust to confirm, I am not using NFS, it is directly on disk.\n\nOther than that, is there a particular option we can set in the\npostgres.conf to mitigate the issue?\n\nThanks a lot for your help.\n\n\nIl giorno sab 4 apr 2020 alle ore 02:49 Thomas Munro <thomas.munro@gmail.com>\nha scritto:\n\n> On Thu, Apr 2, 2020 at 9:25 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > I provided the subject, and added -hackers.\n> >\n> > > Hello,\n> > > I am running postgres 11.5 and we were having issues with shared\n> segments.\n> > > So I increased the max_connection as suggested by you guys and reduced\n> my\n> > > work_mem to 600M.\n> > >\n> > > Right now instead, it is the second time I see this error :\n> > >\n> > > ERROR: could not resize shared memory segment\n> \"/PostgreSQL.2137675995\" to\n> > > 33624064 bytes: Interrupted system call\n> >\n> > The function posix_fallocate is protected against EINTR.\n> >\n> > | do\n> > | {\n> > | rc = posix_fallocate(fd, 0, size);\n> > | } while (rc == EINTR && !(ProcDiePending || QueryCancelPending));\n> >\n> > But not for ftruncate and write. Don't we need to protect them from\n> > ENTRI as the attached?\n>\n> We don't handle EINTR for write() generally because that's not\n> supposed to be necessary on local files (local disks are not \"slow\n> devices\", and we document that if you're using something like NFS you\n> should use its \"hard\" mount option so that it behaves that way too).\n> As for ftruncate(), you'd think it'd be similar, and I can't think of\n> a more local filesystem than tmpfs (where POSIX shmem lives on Linux),\n> but I can't seem to figure that out from reading man pages; maybe I'm\n> reading the wrong ones. Perhaps in low memory situations, an I/O wait\n> path reached by ftruncate() can return EINTR here rather than entering\n> D state (non-interruptable sleep) or restarting due to our SA_RESTART\n> flag... 
anyone know?\n>\n> Another thought: is there some way for the posix_fallocate() retry\n> loop to exit because (ProcDiePending || QueryCancelPending), but then\n> for CHECK_FOR_INTERRUPTS() to do nothing, so that we fall through to\n> reporting the EINTR?\n>\n", "msg_date": "Tue, 7 Apr 2020 10:58:19 +0200", "msg_from": "Nicola Contu <nicola.contu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: EINTR while resizing dsm segment." 
>\n>> > But not for ftruncate and write. Don't we need to protect them from\n>> > ENTRI as the attached?\n>>\n>> We don't handle EINTR for write() generally because that's not\n>> supposed to be necessary on local files (local disks are not \"slow\n>> devices\", and we document that if you're using something like NFS you\n>> should use its \"hard\" mount option so that it behaves that way too).\n>> As for ftruncate(), you'd think it'd be similar, and I can't think of\n>> a more local filesystem than tmpfs (where POSIX shmem lives on Linux),\n>> but I can't seem to figure that out from reading man pages; maybe I'm\n>> reading the wrong ones. Perhaps in low memory situations, an I/O wait\n>> path reached by ftruncate() can return EINTR here rather than entering\n>> D state (non-interruptable sleep) or restarting due to our SA_RESTART\n>> flag... anyone know?\n>>\n>> Another thought: is there some way for the posix_fallocate() retry\n>> loop to exit because (ProcDiePending || QueryCancelPending), but then\n>> for CHECK_FOR_INTERRUPTS() to do nothing, so that we fall through to\n>> reporting the EINTR?\n>>\n>\n", "msg_date": "Tue, 7 Apr 2020 12:25:36 +0200", "msg_from": "Nicola Contu <nicola.contu@gmail.com>", "msg_from_op": true, "msg_subject": "Re: EINTR while resizing dsm segment." }, { "msg_contents": "On Tue, Apr 7, 2020 at 8:58 PM Nicola Contu <nicola.contu@gmail.com> wrote:\n> So that seems to be a bug, correct?\n> Just to confirm, I am not using NFS, it is directly on disk.\n>\n> Other than that, is there a particular option we can set in the postgres.conf to mitigate the issue?\n\nHi Nicola,\n\nYeah, I think it's a bug. 
We're not sure exactly where yet.\n\n\n", "msg_date": "Tue, 7 Apr 2020 22:25:48 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: EINTR while resizing dsm segment." } ]